American Cinematographer Manual
Tenth Edition
ISBN 978-1-4675-6830-2
You hold in your hands the result of five years of thought, debate, and
inspiration. When Stephen Burum, ASC, asked me to be the editor of this
10th edition of the venerable American Cinematographer Manual, the industry
was in the birth throes of transition; digital intermediates were the exception and
not the rule, we still used the term video rather than digital, and 4K as a viable
production and post format was far beyond our reach. All these changes and
many more came in rapid succession as we labored to bring this book to press.
No sooner had we completed an article than it had to be updated due to sweeping advances in technology.
I am at heart a low-tech person. I like things simple. I questioned whether I
was even the right person to be taking this book on. But in a strange way, it
made sense. If I could design the manual in a manner that made sense to me,
then the information it contained would be accessible to a wide spectrum of
professional and prosumer image makers. Cinematographers today need to be
closet scientists in order to decipher the tools they have at their disposal, but all
those technologies need not be daunting; they can be fun to explore and exciting
to utilize. Now more than ever, the dreams of a whole new generation can be
made into real moving images. This edition contains some of the most
comprehensive information on digital that you will find anywhere, but it doesn’t
leave behind the essential building blocks of film technology, which is at its
highest level of development. Where we are now is really having the best of both
worlds.
When you embark on a journey to a new world, it’s best to take along a crew
who know the territory. The contributors to this edition have proven to be the
most helpful and dedicated group of scientists, artists and craftspeople one could
possibly hope to assemble. Thanks go to Jim Branch, Curtis Clark, ASC;
Richard Crudo, ASC; Dan Curry; Linwood G. Dunn, ASC; Richard Edlund,
ASC; Jonathan Erland; Jon Fauer, ASC; Ray Feeney; Tom Fraser; Taz
Goldstein; Colin Green and the Previsualization Society; Frieder Hochheim;
Michael Hofstein; Bill Hogan; John Hora, ASC; Rob Hummel; Steve Irwin;
Kent H. Jorgensen; Frank Kay; Glenn Kennel; Jon Kranhouse; Lou Levinson;
Andy Maltz and the AMPAS Science and Technology Council; Vincent Matta;
Tak Miyagishima; David Morin; M. David Mullen, ASC; Dennis Muren, ASC;
Iain A. Neil; Marty Ollstein; Josh Pines; Steven Poster, ASC; Sarah Priestnall;
David Reisner; Pete Romano, ASC; Andy Romanoff; Dr. Rod Ryan; Nic Sadler
and Chemical Wedding; Bill Taylor, ASC; Ira Tiffen and Evans Wetmore.
Special thanks go to Iain Stasukevich for his assistance in research, Lowell
Peterson, ASC, Jamie Anderson, ASC and King Greenspon for their
proofreading skills, and Deeann Hoff and Mark McDougal for handling the
layout of the book.
Extra special thanks go to Brett Grauman, general manager of the ASC, Patty
Armacost, events coordinator, Delphine Figueras, my assistant while I was serving as ASC president and trying to finish this book, Saul Molina and
Alex Lopez for their expertise in marketing and events management, Owen
Roizman, ASC, who is the heart, soul and inspiration for the organization,
George Spiro Dibie, ASC, my mentor and friend, Martha Winterhalter, whose
knowledge of what we do and how to convey it to the world knows no bounds,
and Gina Goi, my wife, for her love and support during my many twilight
editing sessions.
Enjoy the Manual. Go make movies.
Foreword
Origins of the American Society of Cinematographers
Responsibilities of the Cinematographer
Summary of Formats
Tak Miyagishima
Basic Digital Concepts
Marty Ollstein
A Primer for Evaluating Digital Motion Picture Cameras
Rob Hummel
Digital Cinematography on a Budget
M. David Mullen, ASC
Putting the Image on Film
Rob Hummel
Comparisons of 35mm 1.85, Anamorphic and Super 35 Film Formats
Rob Hummel
Anamorphic Cinematography
John Hora, ASC
Exposure Meters
Jim Branch
Lenses
Iain A. Neil
Camera Filters
Ira Tiffen
Camera-Stabilizing Systems
Previsualization
Colin Green
3-D Stereoscopic Cinematography
Rob Hummel
Day-for-Night, Infrared and Ultraviolet Cinematography
Dr. Rod Ryan
Aerial Cinematography
Jon Kranhouse
Underwater Cinematography
Pete Romano, ASC
Arctic and Tropical Cinematography
Filming Television and Computer Displays
Bill Hogan and Steve Irwin
Digital Postproduction for Feature Films
Glenn Kennel and Sarah Priestnall
ASC Color Decision List
David Reisner and Josh Pines
The Academy of Motion Picture Arts and Sciences
Academy Color Encoding System
Curtis Clark, ASC and Andy Maltz
The Cinematographer and the Laboratory
Rob Hummel
Emulsion Testing
Steven Poster, ASC
Finding Your Own Printer Light
Richard Crudo, ASC
Adjusting Printer Lights to Match Sample Clips
Bill Taylor, ASC
Cinemagic of the Optical Printer
Linwood G. Dunn, ASC
Motion-Control Cinematography
Richard Edlund, ASC
Greenscreen and Bluescreen Photography
Bill Taylor, ASC and Petro Vlahos
Photographing Miniatures
Dennis Muren, ASC
In-Camera Compositing of Miniatures with Full-Scale
Live-Action Actors
Dan Curry
Light Sources, Luminaires and Lighting Filters
LED Lighting For Motion Picture Production
Frieder Hochheim
An Introduction to Digital Terminology
Marty Ollstein and Levie Isaacks, ASC
Safety On The Set
Kent H. Jorgensen and Vincent Matta
Preparation of Motion Picture Film Camera Equipment
Marty Ollstein, Michael Hofstein & Tom Fraser
Preparation of Digital Camera Equipment
Marty Ollstein
Camera-Support Systems
Andy Romanoff, Frank Kay and Kent H. Jorgensen
Camera Section
Jon Fauer, ASC and M. David Mullen, ASC
35mm
16mm
Super 8mm
65mm
VistaVision 35mm
Imax
Digital Cameras
Onboard recorders
Table Reference
Cine Lens List
Formulas
Evans Wetmore
Lenses
Extreme Close-Up
Filters
Mired Shift Values
Light Source
Color Balancing
Handheld Apps for Production
Taz Goldstein
SunPATH
Film Stocks
Incident Light
EI Reduction
T-Stop Compensation
Shutter Compensation
Shutter Speeds
Footage Tables
16mm/35mm–Frame Totalizers
Panning Speeds
Time/Speed Effects
Projection/Process
Quick Picture Monitor Set-Up
Lou Levinson
Lighting Fixture Intensity
Further Reference
Index
Origins of the American Society of Cinematographers
For over 93 years, the ASC has remained true to its ideals: loyalty, progress,
artistry. Reverence for the past and a commitment to the future have made a
potent and lasting combination in a world of shifting values and uncertain
motives.
The American Society of Cinematographers received its charter from the State
of California in January 1919 and is the oldest continuously operating motion
picture society in the world. Its declared purpose still resonates today: “to
advance the art of cinematography through artistry and technological progress,
to exchange ideas and to cement a closer relationship among cinematographers.”
The origins of the ASC lie in two clubs founded by cinematographers in 1913.
The Cinema Camera Club was started in New York City by three cameramen
from the Thomas A. Edison Studio: Phil Rosen, Frank Kugler and Lewis W.
Physioc. They decided to form a fraternity to establish professional standards,
encourage the manufacture of better equipment and seek recognition as creative
artists. Meanwhile, the similarly conceived Static Club was formed in Los
Angeles. When Rosen came to the West Coast five years later, he and Charles
Rosher combined the clubs. The ASC now has more than 340 active and
associate members.
The first ASC screen credit was given to charter member Joseph August when
he photographed a William S. Hart picture in 1919.
American Society of Cinematographers’ clubhouse.
The year after its charter, the ASC began publishing American Cinematographer
magazine, which ever since has served as the club’s foremost means of
advancing the art.
The ASC has been very active in recent years in expressing concern about
choices for Advanced Television (ATV), ranging from the choice of aspect ratio
to pushing for the abandonment of interlace displays. At the invitation of the
House and Senate in Washington, D.C., members of the ASC have been asked to
inform and advise legislators on these issues.
Currently, our Technology Committee has created a standard test (StEM) for digital cinema. The committee advises the industry on standards in both production and postproduction for digital capture, manipulation and presentation.
The ASC is not a labor union or guild, but is an educational, cultural and
professional organization. Membership is possible by invitation and is extended
only to directors of photography with distinguished credits in the industry.
—George E. Turner
Basic Digital Concepts
by Marty Ollstein
More recently, to enable the use of a larger chip (matching or exceeding the
35mm frame size) that would provide a reduced depth of field and allow the use
of the large inventory of motion-picture lenses (optimized for the 35mm frame),
camera manufacturers have moved to a single-chip format.
Single-chip cameras, such as the ARRI Alexa and RED Epic, contain a single
large chip. No prism block is needed to separate the color rays from the
spectrum. Image color is created by placing tiny red, green and blue filters over
each sensor on the chip in a particular “mosaic” pattern, then using software
matched to that pattern to calculate color values for each pixel. Each camera manufacturer uses its own proprietary chip design, mosaic filter pattern, selection of filter color,
and dedicated software—all of which have a significant effect on the
characteristics of the image recorded.
The most widely used mosaic filter pattern for single-chip cameras is the
Bayer pattern. (See Figure 2.) The ARRI Alexa and RED Epic both use the
Bayer pattern for their chip. This pattern uses a series of square 2x2 matrices of
filtered photo-receptor sites—two green, one red and one blue. Also called
RGBG, it is 50% green, 25% red and 25% blue. This allocation mimics the
physiology of the human eye which is more sensitive to green light. These
groups of four filtered sensors are repeated throughout the chip. Dedicated
proprietary software interprets the signal (the de-Bayering process) from each sensor site, taking into account the particular spectral transmission of its color filter along with the values of the adjacent sites in the matrix, and assigns a color
and brightness value to each pixel of the recorded image. The image data can be
recorded before this digital conversion (de-mosaic or de-Bayer process) is
performed. This ‘pre-conversion’ format is called raw, and yields more data and
a higher quality image. (See Figure 6.)
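As a rough sketch of what a de-Bayer step does, the following Python fragment reconstructs full RGB values from an assumed RGGB mosaic layout by plain bilinear averaging. It is illustrative only: real cameras use far more sophisticated, proprietary, edge-aware algorithms matched to their particular filter pattern.

    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear(raw):
        """raw: 2-D array of photosite values under an RGGB Bayer mosaic."""
        h, w = raw.shape
        r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)
        kernel = np.ones((3, 3))
        rgb = np.zeros((h, w, 3))
        for ch, mask in enumerate((r_mask, g_mask, b_mask)):
            # Average the known samples of this color in each 3x3 patch.
            num = convolve2d(np.where(mask, raw, 0.0), kernel, mode="same")
            den = convolve2d(mask.astype(float), kernel, mode="same")
            rgb[..., ch] = num / np.maximum(den, 1e-9)
        return rgb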
The choice of the tiny color filters on the sensors—whether narrow spectrum
or wide band—has a significant effect on the dynamic range and color saturation
of the image captured. A wider-band filter leaves the sensor more sensitive to
light, yielding a wider dynamic range and higher native ISO. But the color
recorded by that wide-band filtered sensor is less true and less saturated.
The Panavision Genesis camera uses an RGB stripe mosaic pattern. This
design uses a 2x3 matrix of filtered sensors (two each of red, green and blue) to
measure the data that determines the value of each pixel. In this case, the data is
“oversampled”—six sensors contribute data to define each single pixel in the
recorded image.
The SONY F65 uses a new mosaic pattern that provides red, green, and blue
data for every pixel of a recorded 4K image. The higher resolution 8K sensor
array is rotated at a 45-degree angle so as to place the filtered sensors in position
to measure all three color values for each pixel in a recorded image, producing a
‘true’ 4K RGB output image.
3) RESOLUTION
As the smallest element of a digital image, the pixel represents the limit of
detail that can be displayed. An image composed of a small number of pixels can
only show a rough approximation of a scene, with little detail. The more pixels
used to display an image, the finer the detail that can be revealed. Resolution is
the measure of the finest detail visible in a displayed image, and is defined
numerically by the number of pixels recorded in the image raster—NOT by the
number of sensors in the camera chip. This distinction has created some
confusion and controversy in the resolution claims of some camera
manufacturers.
Camera resolution is commonly defined by the number of lines of pixels (scan
lines) it records. The standard HD camera records 1080 lines, although some
cameras that record 720 lines are also considered HD. An increase in the pixel
line count will produce a proportional increase in resolution and representation
of fine detail. Image resolution is expressed by two numbers: columns (or pixels
per line) x lines. The HD standard image is defined as being 1920x1080 pixels.
A doubling of pixel lines and pixels per line (as the change from 2K to 4K),
increases the total pixel count by a factor of 4, requiring four times the memory
to store the image data. However, the MTF (modulation transfer function, an
optical measure of line-pair resolution) only doubles.
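The arithmetic behind that factor of four is easy to verify; the small Python sketch below counts pixels and estimates per-frame storage, assuming (purely for illustration) uncompressed RGB at 10 bits per channel:

    # Doubling both pixel dimensions quadruples pixel count and storage,
    # while line-pair (MTF) resolution only doubles.
    for name, w, h in [("2K", 2048, 1080), ("4K", 4096, 2160)]:
        pixels = w * h
        megabytes = pixels * 3 * 10 / 8 / 1e6   # RGB, 10 bits/channel
        print(f"{name}: {pixels:,} pixels, ~{megabytes:.0f} MB per frame")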
Some of the most common image-resolution standards include:
• Standard definition analog NTSC = 640x480 (1.33:1)
• HD = 1920x1080 (or 1280x720) (1.78:1)
• 2K = 2048 pixels per line (line count varies with aspect ratio)
• 4K = 4096 pixels per line
Pixel count is not the only element in the imaging chain that affects picture
resolution. Lens quality, precise registration of the chips in a three-chip camera,
and image scaling and resizing conversions also affect image resolution.
4) EXPOSURE
Film has a greater dynamic range, or range of usable f-stops, than most digital
formats. And due to the response of film dyes and silver to exposure extremes,
which causes a gradual “rounding off” of values at either end of the tonal scale
(highlights and shadows), there is a perceived extension of the dynamic range.
Shadows merge smoothly into darkness, and highlights fade gradually (or
“bloom”) into white.
Digital images, however, “clip” at either end of the tonal scale—the shadows
drop off abruptly into solid black, and highlights cut off into flat, white areas
with no definition. Clipping occurs when the exposure moves beyond the
specific threshold which is determined by the sensitivity and capability of the
camera or recording device.
A cinematographer can avoid clipping a digital image by monitoring and
controlling the light levels recorded in a scene. There are several useful digital
tools available for monitoring exposure in a scene. A waveform monitor displays
the exposure levels across a scene, read by the camera from left to right on a
scale of 0–100 IRE (Institute of Radio Engineers). Generally, 0 IRE defines total
black and 100 IRE defines total white, indicating the maximum amount of
voltage that the system can handle. If the levels flatten out at either end of the
scale (0 or 100), the image will clip, and no detail will be recorded in those
areas. NTSC defines 7.5 IRE as black (called the “pedestal” in post, and “setup”
on the set). The waveform monitor displays the pedestal and peak white levels,
and indicates when clipping occurs.
Another tool is the Histogram, which graphically displays the distribution of
light values in a scene, from black to white. Basically a bar chart, a histogram
indicates the proportion of image area (y axis) occupied by each level of
brightness from 0 IRE to 100 IRE (x axis). With a clear indicator of clipping at
either end of the scale (often a red line), it is useful for determining whether the
scene fits within the camera’s dynamic range.
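A histogram of this kind is straightforward to compute from luma values; the sketch below flags clipping when pixels pile up at the extremes of the scale. The 1% warning threshold is an arbitrary assumption for the example, not a broadcast standard.

    import numpy as np

    def exposure_histogram(frame, bins=64, clip_warn=0.01):
        """frame: 2-D array of luma values on a 0-100 (IRE-like) scale."""
        counts, edges = np.histogram(frame, bins=bins, range=(0, 100))
        crushed = np.mean(frame <= 0)     # fraction pinned at black
        clipped = np.mean(frame >= 100)   # fraction pinned at white
        if crushed > clip_warn:
            print(f"warning: {crushed:.1%} of pixels crushed to black")
        if clipped > clip_warn:
            print(f"warning: {clipped:.1%} of pixels clipped to white")
        return counts, edges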
Some cameras record light levels up to 110 IRE. These brighter values can be
used throughout the post-production process, then brought back down to a
“legal” level to avoid being hard-clipped at 100 IRE when broadcast on
television.
A handy exposure tool for digital production allows the user to quickly bring
light levels in a scene within the safe range to avoid clipping. Developed by
David Stump, ASC, the device consists of a hollow sphere (diameter about 1.5
feet) painted glossy white, with a hole in one side (diameter about 2 inches) that
reveals a dark interior painted matte black. To use on the set, place the device in
the brightest area of the scene, usually the position of the main subject. Adjust
the lighting and camera exposure so that the specular (shiny) white highlight on
the white ball registers under 100 or 110 IRE, and the black hole registers 0 IRE.
Some digital cameras have software tools that help protect the shadow and
highlight detail or, if clipping is unavoidable, soften or round off the edge of the
clipping. In the toe or shadow region, the technique involves the manipulation of
the black pedestal (the level at which the image turns black) and black stretch.
Black stretch flattens the curve of the toe, lowering the contrast and putting more levels (and subtlety) into the lower part of the tone scale. The resulting effect of black stretch is to reveal more shadow detail. Going the opposite direction, “crushing the blacks” raises the slope of the toe curve and compresses the tonal values in the shadows. Raising the black pedestal turns more of the
shadow area to pure black. In the highlight region, the “soft clip” or “knee”
function compresses the values near 100 IRE, rounding off the exposure curve,
allowing more highlight detail to be recorded, and giving a more pleasing shape
to the edges of a clip in the very bright areas in the frame.
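The idea of a knee can be sketched in a few lines. The function below is linear up to a knee point and then rolls highlights off toward a ceiling instead of clipping them hard; the knee and ceiling values are assumptions for illustration, not any manufacturer's actual curve.

    import math

    def soft_clip(ire, knee=85.0, ceiling=109.0):
        """Linear below the knee; exponential roll-off toward the ceiling."""
        if ire <= knee:
            return ire
        span = ceiling - knee
        return knee + span * (1.0 - math.exp(-(ire - knee) / span))

    print(round(soft_clip(100.0), 1))   # ~96.2 rather than a hard 100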
5) GAMMA AND LOG/LIN
Film dyes have a logarithmic response to exposure. The traditional
sensitometric curve, used to describe the behavior of film stocks, plots density as
the y coordinate and log exposure as the x coordinate. The resulting S curve,
with a shallow slope in both the toe and shoulder regions, provides more shadow
and highlight detail. Logarithmic code values are an important characteristic of
what is considered film color space and the “film look.” A logarithmic
representation of light values is also typically used when scanning film material
for digital intermediate or visual-effects work. And certain current digital-production cameras, such as the ARRI Alexa and Sony F65, provide log curves as a choice for image acquisition.
CRT (cathode ray tube) monitors have a nonlinear response to any input—the
intensity of the image displayed is not directly proportional to the video signal
input. This response is called gamma, contrast or a “power function” and is
generally considered to be 2.6 for HD monitors.
Digital video cameras capture a scene as analog voltage and then digitize it
with an A/D converter to create linear digital-code values. To view these
unprocessed linear code values would require a linear display whose response
was directly proportional to its input signal. But the CRT is not a linear display;
its gamma or power function curve gives a nonlinear response to input signals.
Linear digital code values do not display properly on a CRT monitor. The tones
appear desaturated with very low contrast. To generate a video signal that will
display properly on a CRT, video cameras apply a gamma correction equal to the
reciprocal of the CRT power function.
To differentiate video color space from the logarithmic film color space, video
is often referred to as “linear.” This is inaccurate, however, due to the nonlinear
gamma correction applied to the video signal. Video color space should more
accurately be called a “power function” system.
As use of the CRT has decreased, new display devices have been developed
that accept a wide range of input signals. No longer constrained by the limits of
the CRT, digital cameras have been built to record image data without applying
the gamma correction of video. Different input curves have been used in the
cameras, often with some version of the log characteristic curve of film, such as
ARRI’s Log C or SONY’s S-log. Image data recorded in these formats retains
more information and can produce a higher quality image. Instead of applying
the video gamma correction, these cameras convert the linear code values
measured by the sensor into logarithmic code values which approximate a film
gamma, resulting in a wider dynamic range and greater shadow detail.
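A minimal sketch of that linear-to-log conversion, loosely patterned on the Cineon convention of roughly 90 code values per stop with reference white near code 685 (treat the exact constants as illustrative):

    import math

    def lin_to_log_10bit(linear, ref_white_code=685, codes_per_stop=90):
        """Map scene-linear values (1.0 = reference white) to 10-bit log codes."""
        if linear <= 0:
            return 0
        code = ref_white_code + codes_per_stop * math.log2(linear)
        return max(0, min(1023, round(code)))

    # Each stop down costs ~90 code values, wherever you are on the scale:
    print(lin_to_log_10bit(1.0))    # 685
    print(lin_to_log_10bit(0.5))    # 595
    print(lin_to_log_10bit(0.25))   # 505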
6) VIDEO VS. DATA, RAW VS. DE-BAYERED
Video vs. Data
As described above, video cameras were designed to produce an image that
displays correctly on a CRT monitor. To properly display on a CRT, a video
signal uses a gamma correction that is the inverse (reciprocal) of the CRT
gamma. The video format was also designed to limit the dynamic range of the
light values it records in order to fit into the narrower brightness range that can
be displayed by a CRT monitor. This conversion of light values into a video
signal by the camera, which involves applying the gamma correction and
limiting the dynamic range, reduces image quality and restricts its capacity to
portray shadow and highlight detail. Another characteristic of video is that it
records images in fields and frames, but stores them as clips—streaming clips
that encode all frames of a shot (head to tail) into a single file.
The highest quality video-recording format used in production today is
HDCam-SR. Originally developed for recording on videotape cassettes, the
HDCam-SR format can now also be recorded on dedicated hard drives or solid-
state media. Other video recording formats in use include DVCPro-HD and
HDCam.
More recently, digital data recording has been developed that records images
in a frame-based format in which each frame is saved as a discrete data image
file. This format allows direct access to any frame without the need to scroll
through a video clip file to locate a frame. The camera and recording system
selected determine the particular recording format used. Some file-based formats
in use include .dpx, .jpg, .tiff and OpenEXR.
The color space designated as the standard for Digital Cinema distribution is
X′Y′Z′, a gamma encoded version of the CIE XYZ space on which the
chromaticity diagram is based. A device-independent space, X′Y′Z′ has a larger
color gamut than other color spaces, going beyond the limits of human
perception. All physically realizable gamuts fit into this space.
10) ACES
The digital workflow of a production today involves the passing of an image,
with a particular Look attached, from facility to facility and system to system,
requiring conversions and transfers to different formats and media. The risk of
failing to achieve the intended result is significant—image quality (including
resolution, exposure and color data) can be lost along the way. Look
management and color management, discussed below, help reduce these risks,
but a larger framework for the entire workflow has been missing.
The Academy of Motion Picture Arts and Sciences’ Science and Technology
Council developed ACES—the Academy Color Encoding System. ACES
provides a framework for converting image data into a universal format. The
ACES format can be manipulated and output to any medium or device. The
system specifies the correct path from any camera or medium to ACES by using
a dedicated IDT (input device transform). Camera manufacturers such as Sony
and Arri have already generated an IDT for their camera systems. Display device
manufacturers of projectors and monitors provide an ODT (output device
transform) for their devices. Different ODTs are also needed for each distribution
medium, including digital cinema projection, Blu-ray, and HD broadcast. A final
color grading “trim pass” is still recommended for each master so as to ensure fidelity to the intended look.
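Conceptually, the workflow is a chain of transforms. The Python sketch below is only a schematic of that chain; the function names are placeholders, not the Academy's actual API.

    # ACES-style flow: camera image -> IDT -> grade in ACES -> ODT -> display.
    def render_for_display(camera_pixels, idt, grade, odt):
        aces = idt(camera_pixels)    # camera-specific input device transform
        graded = grade(aces)         # creative color decisions in ACES space
        return odt(graded)           # output device transform for the target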
The most striking gain for the cinematographer is the ability for ACES to
encode the full range of image information captured by any camera. Using a 16-
bit floating-point OpenEXR format, ACES has the potential of encoding the full
gamut of human vision, beyond what any camera can record today. It
accommodates a dynamic range of 25 stops, far past the capability of any device.
Highlights that were clipped and shadows that have been crushed using other
formats and workflows now can re-emerge with surprising clarity. ACES
empowers the cinematographer to use the full capability and range of the tools at
his or her disposal.
11) BIT DEPTH
Bit depth determines the number of levels available to describe the brightness
of a color. In a 1-bit system, there is only 0 and 1, black and white. An 8-bit
system has 256 steps, or numbers from 0–255 (256 shades of gray). Until
recently, 8-bit was standard for video and all monitors. Most monitors still have
only 8-bit drivers, but several HD video camera systems support 10-bit and 12-
bit signals through their image-processing pipelines.
A 10-bit system has 1024 steps, allowing more steps to portray subtle tones. A
linear representation of light values, however, would assign a disproportionate
number of steps to the highlight values—the top half of the full range (512–1023) would define only one f-stop, while leaving 0–511 to define all the rest. A
logarithmic representation of code numbers, however, spreads equal
representation of brightness values across the full dynamic range of the medium.
In 10-bit log space, 90 code values are allocated for each f-stop. This allows for
more precision to define shadow detail. For this reason, 10-bit log is the standard
for recording digital images back to film. Some color publishing applications use
a 16-bit system for even more control and detail. The cost of the additional bits
is memory and disk space, bandwidth and processing time.
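The disproportion of a linear encoding is easy to see by counting code values per stop, as in this small sketch:

    # In a 10-bit LINEAR encoding, each successive f-stop down from peak
    # white occupies half as many code values as the stop above it.
    top = 1023
    for stop in range(1, 6):
        lo = top // 2
        print(f"stop {stop} down: codes {lo}-{top} ({top - lo} values)")
        top = lo
    # stop 1: 512 values ... stop 5: only 32 values.
    # A 10-bit LOG encoding instead gives every stop ~90 code values.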
The consequence of too few bits can be artifacts, or flaws in the image
introduced by image processing. Artifacts include banding, where a smooth
gradient is interrupted by artificial lines, and quantization, where a region of an
image is distorted. If image data is recorded or scanned in 10-bit color, down-converted to an 8-bit format for postprocessing, then up-converted back to 10-bit for recording back to film, image information (and usually quality) will have been lost and cannot be retrieved. Whenever possible, it is preferable to maintain
the highest quality of image data and not discard information through conversion
to a more limited format. Loss of image information can also result from
reduction in color space and gamut, color sampling, and resolution.
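The round-trip loss is simple to demonstrate; in the sketch below, a 10-bit gradient squeezed through 8 bits comes back with only a quarter of its levels, which is what appears on screen as banding. (Real pipelines may dither, which disguises but does not recover the lost levels.)

    import numpy as np

    ramp10 = np.arange(1024)        # smooth 10-bit gradient, 1024 levels
    as8 = ramp10 >> 2               # down-convert to 8 bits
    back10 = as8 << 2               # up-convert back to 10 bits
    print(np.unique(ramp10).size)   # 1024 levels before
    print(np.unique(back10).size)   # 256 levels after: banding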
12) COLOR SAMPLING
Color sampling describes the precision of the measurement of light and color
by the camera system. It is represented by three numbers, separated by colons,
and refers to the relative frequency of measurement, or sampling. The notation
4:4:4 represents the maximum possible precision, in which all values are
measured at equal intervals.
In a video color space, where luminance and chrominance are differentiated,
the first number represents how often the luma (brightness) signal is sampled
(measured) on each line of sensors. The second number indicates how often the
color values (red-minus-luma and blue-minus-luma signals) are sampled for the
first line. The third tells how often the color values are sampled for the second
line. The two lines are differentiated to accommodate interlaced systems. (See
Figure 4.)
4:4:4 captures the most information, sampling color at the same frequency as
luminance. 4:2:2 is the current standard on HD production cameras. The color
information is sampled at half the frequency of the luminance information. The
color precision is lower, but is adequate in most production situations. (Human
vision is similarly more sensitive to brightness changes than it is to color
variation.) Problems can arise, however, in postprocessing, such as in the
compositing of greenscreen scenes, where color precision is important. It is
recommended that any visual-effects shots that require image processing be
recorded in a 4:4:4 format.
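In code, chroma subsampling amounts to keeping fewer chroma samples than luma samples. The sketch below (operating on NumPy-style planes) shows the decimation pattern only; real encoders low-pass filter the chroma before discarding samples.

    def subsample_422(y, cb, cr):
        """4:2:2: full-resolution luma, every second chroma sample per line."""
        return y, cb[:, ::2], cr[:, ::2]

    def subsample_420(y, cb, cr):
        """4:2:0: chroma halved both horizontally and vertically."""
        return y, cb[::2, ::2], cr[::2, ::2]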
Film
Creating a 35mm film master involves conversion of the source master to a
digital format shaped to take best advantage of the capability of film.
Traditionally, the Cineon 10-bit log format has been used in film recorders, but
by using ACES, a higher quality output format could be used. The converted digital files are loaded into a film recorder—either laser- or CRT-based—which exposes them onto 35mm film, creating a new film master.
Digital Cinema
For the digital-cinema release, the DCI (Digital Cinema Initiatives—a joint
effort of the major studios) and the SMPTE have established a set of universal
format specifications for distribution—the DCDM and DCP. The specifications
define the elements (“assets”) used for digital-cinema display, and a structure by
which the assets should be used for successful exhibition. The assets include the
picture (Digital Source Master), soundtrack, subtitles, metadata and security
keys.
The DCDM (digital cinema distribution master) incorporates all specified
assets in an uncompressed format. The DCDM encodes the image in the device-
independent CIE X′Y′Z′ color space as TIFF files, with 12 bits per color channel.
This color space accommodates the entire human visual color spectrum and
gamut. It has two resolution formats: 2K (2048x1080) and 4K (4096x2160).
Source images of other sizes and aspect ratios can be stored within the 2K or 4K
image containers. The DCDM also accepts 3-D formats.
The DCP (digital cinema package) is a compressed, encrypted version of the
DCDM. Using JPEG 2000 compression, the DCP is used for the efficient and
safe transport of the motion-picture content. Upon arrival at the theater (or other
exhibition venue), the DCP is unpackaged, decrypted and decompressed for
projection.
16) ARCHIVING
Finally, all essential digital data should be properly archived for future
generations. Secure storage should be arranged for the digital source master (or ACES master data), and the data should be transferred periodically to fresh media to avoid degradation or corruption. Digital data is vulnerable on all currently
available media, whether it be magnetic tape, disk, hard drive or a solid-state
medium. The frequency of transfer is dictated by the conservatively estimated
life span of the medium being used. The only proven, long-term archive solution
is film—black-and-white C/M/Y separations. Properly stored, black-and-white
film negative can last at least 100 years.
CONCLUSION
With a working understanding of digital technology, the cinematographer can
confidently choose the best methods and devices suited to any project in
production. As new challenges arise and new technology becomes available, he
or she can better know what issues to consider in making the technical and
creative decisions that shape a career.
A Primer for Evaluating Digital Motion Picture Cameras
by Rob Hummel, editor of the 8th edition of the American Cinematographer Manual
Suffice it to say that any aspect ratio is achievable with the current digital motion picture cameras available. Their capture format is generally that of HDTV (1920x1080) or what is commonly called 2K. The aspect ratio of all but
one or two of these cameras is the HDTV aspect ratio of 1.78:1 (16 x 9 in video
parlance). This 1.78:1 aspect ratio is a result of the different camera
manufacturers leveraging what they have built for HDTV broadcasting cameras.
It is rare to find a digital camera that doesn’t owe part of its design to television cameras.
When composing for 1.85, digital cameras generally use almost the entire 1.78
imaging area. When composing for the 2.40:1 aspect ratio, most digital cameras
will capture a letterboxed 2.40 slice out of the center of the imaging sensor,
which, in 1920 x 1080 cameras, results in the pixel height of the image being
limited to only 800 lines (Star Wars Episode 3, Sin City, Superman Returns).
There is one digital camera that employs Anamorphic lenses to squeeze the
2.40 aspect ratio to fit within its sensor’s 1.97 aspect ratio, utilizing the entire
imaging area.
Yet another camera does creative things with the clocking of CCD pixels so
that the entire imaging area is still utilized when shooting a 2.40 image, with a
subtle compromise in overall resolution.
In the future, it is likely that more and more cameras will have imaging
sensors with 4K pixel resolution.
Figure 1. In a 1920 x 1080 Digital Camera, only the center
800 lines are used for a “Scope” or 2.40:1 aspect ratio.
RESOLUTION VS. MEGAPIXELS
When it comes to determining which digital camera to choose, don’t allow
claims of “megapixels” to influence your understanding of what a camera is
capable of. “Megapixels” is a term used loosely to indicate the resolution of a digital camera, and it doesn’t follow any set guidelines. There are many factors that would
have to be taken into account if you were going to evaluate the merits of a digital
imaging device based on specifications alone.
The most straightforward way to understand a digital motion picture camera’s
capabilities is to shoot your own rigorous tests and evaluate them. In the same
manner as when a new film stock is introduced, the best way to truly understand that emulsion’s capabilities is to test it, rather than rely on the claims of the manufacturer.
Also, it will be less confusing if you focus your evaluations on the final image
delivered by a given digital camera. Claims about a camera’s imaging sensor can
be influenced by marketing. If we concentrate on the final processed image that
is delivered for projection, color correction, and final presentation, we will be
evaluating a camera’s true caliber.
SCANNERS VS. CAMERAS
To clarify, at the risk of confusing matters: 2K and 4K film scanners generally capture
more information than their camera counterparts; a 2K film scanner will usually
have a CCD array of 2048 x 1556, while a 4K scanner will capture a 4096 x
3112 image. The lesson here is that one’s definition of the dimensions of 2K or 4K can vary; the terms 2K and 4K are only guidelines, and your mileage may vary.
It is important to understand these variations in characteristics, and the need to be very specific when describing the physical characteristics of film cameras,
digital cameras, scanners and telecines. In the world of film cameras (and film
scanners), 2K refers to an image which is 2048 pixels horizontally (perf to perf)
and 1556 pixels vertically. This image captures the area of either the SMPTE 59
Style C full aperture frame (.981″ x .735″) or the SMPTE 59 Style B sound
aperture frame (.866″ x .735″).
A 4K scan captures the same areas of the film frame as a 2K scan, but the
image captured is 4096 x 3112. With both 2K and 4K scanners, each individual
pixel contains a single unique sample of Red, Green and Blue.
In the digital camera world, 2K often refers to an image that is 1920 pixels
horizontally and 1080 pixels vertically. Again, each individual pixel contains a
single unique sample of Red, Green and Blue. This sampling of a unique Red,
Green and Blue value for each pixel in the image is what is called a “true RGB”
image, or in video parlance, a 4:4:4 image. While these cameras have an image frame size that corresponds to the HDTV standard, they provide a 4:4:4 image from the sensor, which is not part of the HDTV standard. That is a good thing, as 4:4:4 will yield a superior picture.
FILL FACTOR
There is one area of a digital camera’s specifications that is most helpful in determining its sensitivity and dynamic range: the statistic that conveys how much of an imaging sensor’s area actually captures light versus how much is blind, relegated to the circuitry for transferring image information. This is called the “fill factor.” It is also a statistic that is not readily published by all camera manufacturers.
This is an area where not all digital cameras are created equal. The portion of a digital imaging sensor’s area that is actually sensitive to light (the “fill factor”) has a direct correlation to image resolution and exposure latitude. With the currently
available professional Digital Motion Picture Cameras, you will find a range of
high profile cameras where less than 35% of the sensor’s total area is sensitive to
light, to cameras where more than 85% of the sensor is light sensitive.
As film cinematographers, we are used to working with a medium where it
was presumed that the entire area of a 35mm film frame is sensitive to light; in
digital parlance, that would be a fill factor of 100%. When a digital camera has a
fill factor of 40%, that means it is throwing away 60% of the image information
that is focused on the chip. Your instincts are correct if you think throwing away
60% of image information is a bad idea. With this statistic, you can quickly
compare camera capabilities, or at least understand their potential.
The higher the fill factor of a given sensor (closer to 100%), the lower the
noise floor will be (the digital equivalent of film grain) and the better the
dynamic range will be.
DIGITAL SENSORS AND AMOUNT OF LIGHT THEY
CAPTURE
Since the imaging sites on a solid state sensor are arrayed in a regular grid,
think of the 40% sensitive area as being “holes” in a steel plate. Thus, the image
gathered is basically similar to shooting with a film camera through a fine steel
mesh. You don’t actually see the individual steel gridlines of the mesh, but it
tends to have an effect on image clarity under most conditions.
In terms of sensor types with progressively more area sensitive to light, there
are basically two categories: photodiodes (less area) and photogates (more area).
Depending on the pixel size, cameras utilizing a single-chip photodiode interline-transfer array (either CCD or CMOS) would be on the low end, with only 35% to 40% of their total area sensitive to light, up to a theoretical maximum of 50% for multichip (RGB) photodiode sensors. Next would be single-chip photogate-based sensors that can, again depending on pixel size, have anywhere from 70% to over 85% of their area sensitive to light.
In terms of light sensitivity, as in exposure index, photodiode sensors will have a higher sensitivity than photogate sensors, albeit with an associated trade-off in latitude and resolution.
In addition, solid state sensors tend to have various image processing circuits
to make up for things such as lack of blue sensitivity, etc., so it is important to
examine individual color channels under various lighting conditions, as well as
the final RGB image. Shortcomings in automatic gain controls, etc. may not
appear until digital postproduction processes (VFX, DI, etc.) begin to operate on
individual color channels.
MORE ON MEGAPIXELS
Much confusion could be avoided if we would define the term “pixel” to
mean the smallest unit area which yields a full color image value (e.g., RGB,
YUV, etc., or a full grayscale value in the case of black and white). That is, a
“pixel” is the smallest stand-alone unit of picture area that does not require any
information from another imaging unit. A “photosite” or “well” is defined to be
the smallest area that receives light and creates a measure of light at that point.
All current Digital Motion Picture cameras require information from multiple
“photosites” to create one RGB image value. In some cases, three photosites yield one RGB value; in others, six; and Bayer-pattern devices combine numerous photosites to create each RGB value.
Digital Cinematography on a Budget
by M. David Mullen, ASC
Before the year 2000, video technology had only been used sporadically for
theatrical releases, mainly documentaries. For narrative fiction, milestones
include Rob Nilsson’s independent feature Signal 7 (1985), shot on 3⁄4″ video,
and Julia and Julia (1987), shot on 1125-line analog high-definition video.
However, using video technology for independent features—primarily as a
lower-cost alternative to film—didn’t catch on until a number of elements fell
into place: the introduction of digital video camcorders, desktop computer
nonlinear editing systems, and the increase in companies offering video-to-film
transfer work, all of which began to appear by the mid 1990s.
The major turning point came with the worldwide box office success of two
features shot on consumer camcorders, the Dogme 95 film Festen (The Celebration) (1998) and The Blair Witch Project (1999), proving that cinema
audiences were willing to watch movies shot in video if the subject matter was
compelling enough and the visual quality seemed to suit the content. However,
with 35mm film being the gold standard for narrative feature production, many
independent filmmakers have pushed manufacturers to develop affordable
technology that would bring an electronic image closer to the look of 35mm
photography.
24 fps progressive-scan (24P) digital video appeared in 2000 with the
introduction of the Sony HDW-F900 HDCAM pro camcorder; then in late 2002,
Panasonic released the AG-DVX100, a Mini-DV “prosumer” camcorder with a
24P mode that cost less than $5000. Not long after that, lower-cost high-
definition video cameras appeared, starting with the JVC GR-HD1 HDV
camcorder in late 2003. The next significant trend was the movement away from
traditional tape-based recording, allowing greater options in frame rates and
recording formats within the same camera. The first prosumer camera with this
design approach was the Panasonic AG-HVX200, released in late 2005; it came
with a standard Mini-DV VTR but could also record footage to P2 flash memory
cards.
By 2009, there were HD video cameras being sold on the market for less than
$1000, and now even phones and still cameras are capable of shooting HD
video. Today many movie theaters are being converted to digital projection; this
trend—combined with emerging online distribution schemes—has diminished
the need for a digital feature to be transferred to 35mm film (though there
remain good archival reasons for doing this).
The term “prosumer” vaguely covers a range of lower-cost video products
with a mix of consumer and professional features, often in a small-sized camera
body. Prosumer cameras, by definition, are not only used by consumers but by
professionals as well, either for cost reasons, or because their portability and low
profile are better suited to a particular type of production. In fact, some of the
cameras discussed in this article are actually made and sold by the professional
division of the manufacturer and thus are not technically prosumer products.
Therefore, rather than separate cameras into debatable categories of
“professional/prosumer/consumer,” I will make the cut-off point for discussion any camera sold for under $15,000 that is capable of professional-quality video, preferably with a 24P or 25P option.
In this article, “SD” refers to standard definition video and “HD” refers to
high definition video. “24P” refers to 24 fps progressive-scan video. “DSLR”
refers to single-lens reflex still cameras with a digital sensor. “NLE” refers to
nonlinear editing.
ASPECT RATIO ISSUES
Theatrical films are usually projected in 4-perf 35mm prints at 24 fps, either in
the matted widescreen format (using a projector mask to crop the image,
commonly to 1.85:1) or in the anamorphic widescreen format (the printed image
is approximately 1.20:1 with a 2X horizontal squeeze, stretched to nearly 2.40:1
with an anamorphic projector lens). Current standards for digital cinema
presentations follow the same conventions for aspect ratios.
In broadcast video, however, the standard aspect ratios are 4 x 3 (1.33:1) or 16
x 9 (1.78:1). Since most video cameras shoot in one or both formats, with 16 x 9
now becoming dominant, there are framing considerations for anything intended
for theatrical release where the 1.85 and 2.40 ratios prevail.
In digital terms, the NTSC picture area is 720 x 480 (or 486) pixels; PAL is
720 x 576 pixels. An HDTV picture is either 1280 x 720 pixels or 1920 x 1080
pixels.
If you do some simple math, you’ll realize that 720 x 480 equals a 1.50:1
aspect ratio, and 720 x 576 equals 1.25:1—neither is 1.33:1 (4 x 3). To confuse
things further, the 16x9 option in NTSC and PAL share the same pixel
dimensions as 4 x 3. The simple explanation is that the pixels in many recording
formats are not perfectly square. You can see this right away when you capture a
frame of SD video and display it on a computer monitor as a still image.
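The nonsquare-pixel arithmetic is simple: the pixel aspect ratio is the intended display shape divided by the shape of the stored pixel grid, as the sketch below shows.

    def pixel_aspect(width, height, display_ratio):
        """Pixel aspect ratio needed for a stored grid to fill a display shape."""
        return display_ratio / (width / height)

    print(round(pixel_aspect(720, 480, 4 / 3), 3))    # NTSC 4x3:  ~0.889
    print(round(pixel_aspect(720, 576, 4 / 3), 3))    # PAL 4x3:   ~1.067
    print(round(pixel_aspect(720, 480, 16 / 9), 3))   # NTSC 16x9: ~1.185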
On a 16 x 9 monitor, a 16 x 9 SD recording displays correctly. But when
played on a 4 x 3 monitor, it fills the screen but looks stretched vertically
because of its nonsquare pixels, unless converted into a 4 x 3 signal with a 1.78
letterbox (a DVD player, for example, can be set up to automatically do this, but
many tape decks cannot). This is why 16 x 9 SD is often called “anamorphic,”
not to be confused with the photographic process of the same name. HD, having
a native 16 x 9 aspect ratio, does not use this anamorphic technique unless
downconverted to 16 x 9 SD.
4 x 3 has given way over time to 16 x 9 as HD has become more
commonplace in shooting, post, and distribution. Since HD cameras have 16 x 9
sensors, and the costs of these cameras have fallen to the same levels of SD
cameras, there is little reason to deal with 4 x 3 SD camera issues anymore, even
for SD distribution.
In terms of composing 16 x 9 for eventual 35mm 1.85 print projection, most
prosumer cameras do not offer 1.85 framelines for the viewfinder. However, 16
x 9 is so similar to the 1.85 ratio that simply composing shots with slightly more
headroom will be enough compensation for a later 1.85 crop. Many viewfinders
can be set up to display a slightly smaller “safe area” inside the full frame that
could serve as a rough guide for how much to protect the image. Also, one could
point the camera at a 1.85 framing chart and tape-off any production monitors to
match those framelines.
A recording of this framing chart would be useful in post as well.
If a film transfer is planned, it would be better to leave the final color-
corrected master in 16 x 9 full-frame rather than letterbox it to 1.85. The entire
16 x 9 image would then be transferred to film within the 35mm sound aperture
area (aka “Academy”) with a 1.78 hard matte. The black borders of the hard
matte would be just outside the projected 1.85 area. If you had transferred a 1.85
letterboxed image to film, then the 1.85 projector mask would have to be
precisely lined up with the hard matte in the print or else the audience would see
some of the matte on screen. A 1.78 hard matte allows some mild vertical
misalignment during 1.85 projection, but not enough to allow the image to be
clearly misframed by the projectionist. Also, since full-frame 16 x 9 and 1.85 are
similar, keep the entire 16 x 9 frame clear of any film equipment (like mics and
dolly tracks) when shooting. Odds are high that you will need to deliver a 16 x 9
full-frame version for HDTV broadcast, so protecting the entire frame will
eliminate the need for expensive reframing and retouching in post.
Some filmmakers wish to shoot 16 x 9 video for transfer to 35mm 2.40
anamorphic. The most common solution is to simply compose the image for
cropping to 2.40; for transfer to film, either a 2.40 letterboxed version or the 16 x
9 full-frame master can be used. You would instruct the transfer facility to crop
the recording to 2.40 and transfer to the 35mm anamorphic (“scope”) format.
The cropping and stretching are usually done by the film recorder. If you submit
a 16 x 9 full-frame recording, it is a good idea to add a framing leader showing
the 2.40 picture area within 16 x 9.
Figure 1. 1.85:1 format inside a 1.78:1 frame.
In terms of motion artifacts during playback, the normal 3:2 pulldown cadence
is designed to be as smooth as possible when viewing 24 fps material on 60i
display devices; the scheme is more effectively “buried” but therefore also
harder to extract in post. The advanced pulldown cadence is not as smooth but
allows for a cleaner extraction of the original 24P frames. As you can see in the
charts above, the original “C” frame is split between two different video frames
when using normal pulldown; this means that in order to recover this frame, those video frames have to be decompressed and the recovered 24P frame recompressed back into the original codec. This can mean that the “C” frame
will have suffered some possible degradation compared to the A, B, and D
frames. As you can see in the chart for the advanced pulldown scheme, every
third video frame is simply discarded in order to recover 24P in editing, and this
discarded frame was the only one where each field came from a different
original frame.
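The normal cadence can be written out explicitly. The sketch below maps four 24P frames (A-D) onto ten fields, paired into five video frames, which makes it clear why the “C” frame straddles two video frames:

    # Normal 2:3 pulldown: 4 film frames (A-D) -> 10 fields -> 5 video frames.
    cadence = [2, 3, 2, 3]                  # fields contributed per 24P frame
    fields = []
    for frame, n in zip("ABCD", cadence):
        fields += [frame] * n
    video_frames = [tuple(fields[i:i + 2]) for i in range(0, 10, 2)]
    print(video_frames)
    # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
    # C appears only in mixed frames, so it must be reassembled in post.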
Due to its name, a number of people mistakenly believe that the advanced
pulldown is “better”—either more film-like or better-looking. It’s not. Its only
purpose, in fact, is to be removed in post. If your edited 24P project needs to be
converted back to 60i for broadcast applications, you’d then add a standard
pulldown scheme to a separate 60i master.
While some interlaced-scan cameras offer a simulated 24 fps effect, this is
really only intended to create film-like motion for 60i projects, not for transfer to
24 fps film. However, there are many consumer HD cameras now with
progressive-scan sensors capable of true 24P capture, even if some of them
record 24P to 60i with a pulldown. A number of these cameras are multi-format,
with different frame rate options, particularly the ones using solid-state recording
instead of videotape.
It is recommended that any pulldown be removed so that one can edit with the
original progressive frames. This allows greater flexibility and quality in post for
finishing the project to multiple deliverable formats. A progressive-scan master is optimal for a transfer to film as well as for distribution on DVD, Blu-ray, and the
Internet. Otherwise, once you start editing a 24P-to-60i recording, you will be
breaking up the pulldown cadence, making it harder to remove later.
Since many of these cameras record 24P in different ways, it would be prudent
to find out what current editing software handles the particular camera and
recording method you will be employing. Updates to current programs are
constantly being released to accommodate these variations of 24P.
RECORDING FORMATS
This list is not all-inclusive because some formats and recorders fall outside
the budget range that this article covers.
DV
This term describes both a recording codec (compression/decompression
algorithm) and a tape format. Most lower-end SD cameras use the DV25 codec,
a 5:1 DCT intraframe compression at a fixed bitrate of 25 Mbit/sec. Chroma
subsampling is 4:1:1 (NTSC) or 4:2:0 (PAL). The recorded video signal is 60i
(NTSC) or 50i (PAL). Mini-DV and DVCAM cassettes are commonly used; the
picture quality is the same for both tape formats. DVCAM uses a slightly wider
track pitch and faster speed to reduce drop-outs and increase physical robustness.
DVCPRO uses 4:1:1 worldwide but otherwise is still 25 Mbit/sec. DVCPRO50
is 50 Mbit/sec with 4:2:2 subsampling and roughly 3.3:1 compression.
HDV
This is an MPEG-2 recording format that uses both intraframe and interframe
compression to reduce HD down to 25 Mbit/sec for 1080i, 19 Mbit/sec for 720P.
Having the same bitrate, more or less, as DV, it can therefore use Mini-DV tapes
for recording and the IEEE 1394 (FireWire) interface. The aspect ratio is 16x9;
pixel dimensions are 1280 x 720 (720P) or 1440 x 1080 (1080i). The 720P
format supports 60P/30P or 50P/25P recording with a 24P option; the 1080i
format supports 60i or 50i recording with a 30P, 25P, and 24P option. Chroma
subsampling is 4:2:0.
“Pro-HD” is JVC’s name for their professional line of MPEG-2/HDV
cameras.
DVCPRO HD
This is an HD codec and tape format developed by Panasonic. It uses DCT
intraframe compression; the bitrate is 100 Mbit/sec when recording to tape.
Chroma subsampling is 4:2:2. Aspect ratio is 16 x 9; the pixel dimensions of the
recorded image are 960 x 720 (720P), 1280 x 1080 (1080/60i), or 1440 x 1080
(1080/50i).
AVCHD
This stands for Advanced Video Coding High Definition. This is an 8-bit
4:2:0 HD format that uses an MPEG-4 codec known as AVC/H.264. In prosumer
HD cameras, it is recorded to nontape media such as memory cards, hard drives,
and 8cm DVDs. It supports the same frame rates as HDV and DV. The bitrate is
between 12 to 24 Mbit/sec. It uses long-GOP (group of pictures) interframe
compression. The Panasonic name for their AVCHD pro camera line is
“AVCCAM”. Sony uses the name “NXCAM” for their line. AVCHD 2.0 allows
bitrates up to 28 Mbit/sec for 1080/60P and for 3-D applications.
AVC-Intra
Developed by Panasonic, this variation of MPEG-4/H.264 HD offers intraframe compression rather than interframe, at 10 bits and at higher bitrates, either 50 Mbit/sec (4:2:0) or 100 Mbit/sec (4:2:2).
XDCAM
Uses different codecs and compression methods (MPEG-2, MPEG-4, DV25).
See above for DV recording specifications. The HD version of XDCAM uses
MPEG-2 compression at bitrates of 18, 25, or 35 Mbit/sec. Recorded pixel
resolution is 1440 x 1080. Chroma sub-sampling is 4:2:0. The Sony XDCAM
EX “HQ” mode allows full-raster 1280 x 720 or 1920 x 1080 4:2:0 recordings at
35 Mbit/sec. “HD 422” mode allows 8-bit 4:2:2 at 50 Mbit/sec.
Canon XF
Similar to XDCAM HD, allows 8-bit 4:2:2 recording using MPEG-2
compression at 50 Mbit/sec, as well as 4:2:0 at 35 or 25 Mbit/sec.
Redcode
This is the name of the wavelet compression (JPEG2000 variant) scheme used
by Red cameras to record RAW sensor data. Different bitrates are offered.
Optical Disk
SD and HD can be recorded using Sony’s professional XDCAM optical disk
system (similar to Blu-ray), but there are only some lower-end consumer DV
cameras with built-in optical disk recorders, using the miniDVD format.
External Recorders
There are portable recorders that can store larger amounts of video or data
than the camera can internally; some of these external devices are small enough
to be mounted directly to the camera. Some use an HDD (hard disk drive) or a
SSD (solid-state drive); others use memory cards for storage. Some examples:
The Atomos Ninja and Samurai recorders capture HDMI (Ninja) or HD-SDI
(Samurai) output to Apple ProRes on 2.5″ HDD/SSDs.
Sound Devices’ PIX 220 and 240 recorders capture HDMI (220) and/or HD-SDI (240) to Apple ProRes or DNxHD on CF cards or 2.5″ SSDs.
A laptop computer can also be used during shooting to process signals for
recording to an internal or external hard drive.
AVOIDING THE RECORDING CODEC
Some cameras can output a video signal before it gets processed and
compressed by the codec used in the internal recorder. For example, an HDV
camera often has an HDMI-out in order to send a picture to a consumer HDTV
monitor; this is usually an 8-bit 4:2:2 HD signal that could be recorded using a
computer (probably with a RAID array), an HDCAM-SR deck, or a data
recorder. You probably will need to use an HDMI-to-HD-SDI converter in order
to connect with these devices, though some devices accept HDMI directly. Some
prosumer cameras already have an HD-SDI output. Single-link HD-SDI is
limited to 1.5 Gbit/sec; an uncompressed 8-bit 4:2:2 1920 x 1080 59.94i signal is
around 1 Gbit/sec. What comes out of these HDMI or HD-SDI connections is
quite variable in quality, depending on the camera itself.
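That 1 Gbit/sec figure is quick to verify, assuming that 8-bit 4:2:2 averages two bytes per pixel:

    # Uncompressed 8-bit 4:2:2 1920x1080 at 59.94i (29.97 frames/sec):
    width, height, fps = 1920, 1080, 30000 / 1001
    bytes_per_pixel = 2                     # 4:2:2 at 8 bits averages 2 B/px
    bits_per_sec = width * height * bytes_per_pixel * 8 * fps
    print(f"{bits_per_sec / 1e9:.2f} Gbit/sec")   # ~0.99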
Though recording large amounts of uncompressed data at HD resolution for a
feature-length project (sometimes 20 to 60 hours of footage) is somewhat
inconvenient for many independent filmmakers, there may be reasons to do that
for limited amounts of footage—for example, when doing chroma key vfx
composites, or when mixing the footage with higher-quality formats. Again,
testing is recommended to see if the results are significantly better, enough to
warrant the difficulty or extra cost in using this approach. Over the years, it has
become easier to manage larger and larger amounts of camera data, which means
that filmmakers will be able to record hours of uncompressed HD without much
effort.
DEPTH OF FIELD AND SENSOR SIZE
Small prosumer video cameras often have 1⁄3″ sensors or even smaller (though a recent trend has been toward larger sensors). This means that, in order to
achieve the same field of view, the focal length used is much shorter than in
35mm cinematography—with a resulting increase in depth of field.
A 1⁄3″ sensor is 4.8mm wide; a Super-35 film frame is 24mm wide. That’s a
5X difference or conversion factor. Therefore you would need to use a 10mm
lens on a 1⁄3″ sensor camera to approximate the field of view of a 50mm lens on
a 35mm movie camera. Practically speaking, this means that 1⁄3″ photography
has about five more stops of depth of field compared to 35mm cine photography,
once you match f-stop, distance, and field of view. The traditional solution to
reduce this major increase in depth of field has been to use the widest aperture of
the lens and then try to work in close-focus or at the telephoto end of the zoom
in order to throw the background into soft focus. However, in smaller shooting
spaces where wider-angle lenses need to be employed, this solution does not
reduce depth of field appreciably.
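The conversion factor and the depth-of-field penalty can both be reduced to simple formulas. The following sketch (an added illustration, using the common rule of thumb that depth of field shifts by about two stops for every doubling of the crop factor once f-stop, distance, and field of view are matched) reproduces the numbers above:

    import math

    def equivalent_focal_length(focal_mm, sensor_width_mm, ref_width_mm=24.0):
        # Focal length on the smaller sensor that matches the field of view
        # of focal_mm on a Super-35 (24mm-wide) frame.
        return focal_mm * sensor_width_mm / ref_width_mm

    def dof_stop_difference(sensor_width_mm, ref_width_mm=24.0):
        # Rule of thumb: depth of field changes by about 2 * log2(crop factor)
        # stops when f-stop, distance, and field of view are held constant.
        crop = ref_width_mm / sensor_width_mm
        return 2 * math.log2(crop)

    print(equivalent_focal_length(50, 4.8))     # 10.0mm on a 1/3-inch sensor
    print(round(dof_stop_difference(4.8), 1))   # ~4.6, i.e., about five stops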
There are some 35mm lens adaptors on the market that project the lens image
onto a 35mm-sized groundglass screen, which is then rephotographed by the
prosumer camera, thus retaining the field of view and depth of field of these
lenses when used in 35mm photography. Since the groundglass screen has a
surface pattern and texture, the more advanced adaptors either vibrate or rotate
the groundglass while shooting, or use an extremely fine-grained material for the
groundglass. The simplest devices leave the texture of the groundglass
screen visible over the image. Since the image coming off of the 35mm lens is upside
down, some adaptors will also flip the image; otherwise, either the camera itself
has to be flipped upside down, or just the viewing monitors (leaving flipping the
recorded image until later in post). A few cameras have an image flip feature.
The first of these adaptors to hit the market was the P+S Technik Mini35,
followed by many variations like the Letus35, MOVIEtube, Redrock M2,
Cinemek Guerilla35, Agus35, Brevis35, etc.
In terms of larger sensors that come closer to 35mm in size, the Sony EX1 and
EX3 cameras use 1⁄2″ sensors (6.97mm wide). Most professional HD camcorders
used for ENG/EFP work have 2⁄3″ sensors (9.58mm wide). There is only an
effective 2.5-stop increase in depth of field with 2⁄3″ photography over 35mm
cine photography, which is easier to compensate for by shooting at wider lens
apertures—at least compared to 1⁄3″ photography. The Blackmagic Cinema
Camera uses a sensor that is almost 16mm wide. The Panasonic AG-AF100 uses
a Micro 4/3 sensor, which is about 75% as large as a 35mm motion picture frame
(the sensor is about 18mm wide versus 24mm wide). Finally, there are now
affordable models with 35mm-sized sensors in them—for example, the Canon
C300 and Sony FS700U, NEX-FS100, NEX-VG10, and PMW-F3 all use a
35mm-sized sensor.
There are also digital still cameras with 35mm-sized sensors that shoot HD
video. Some even have a “full-frame” (FF) 35mm sensor, 36mm x 24mm, which
is the same size as the 8-perf 35mm VistaVision frame. With FF35 cameras,
there is an effective 1.5-stop loss in depth of field over 35mm cine photography
once you match field of view, distance, and shooting stop.
Today, the shallower-focus look of 35mm photography is no longer restricted
to expensive professional cameras; however, keep in mind that with less depth of
field comes greater difficulty in follow-focusing during a scene—and good focus
is critical if the goal is projection on a large theatrical screen, or even on large
consumer HDTVs.
SHOOTING HD VIDEO ON A DIGITAL STILL CAMERA
Digital still cameras have had limited video capabilities for years now, but
recently, many have been able to shoot progressive-scan HD, either 720P or
1080P. This type of photography has been labeled “V-DSLR”, “HD-DSLR”,
“HDSLR” or “DSLR Video” in various online forums and publications (though
some of the still cameras being used are not actually DSLRs but mirrorless).
Their small size, low cost, and high sensitivity have opened up new possibilities
in cinematography.
Canon EOS 1D C. Still camera (DSLR) with FF35 18MP CMOS sensor.
Capable of outputting 8-bit 4:2:2 Motion JPEG 4K video to a CF card at 24
fps (4K uses an APS-H crop of the sensor). Also records 8-bit 4:2:0 1080P
video at either 24P or 60P using full-frame of the sensor, with option to use a
S35 crop. Uncompressed 8-bit 4:2:2 signal output via the HDMI-out.
Canon XF305. 1⁄3″ 3-CMOS, 2.1MP full-res 1080P sensors. Records 8-bit
4:2:2 MPEG-2 (50 Mbit/sec) to CF cards. Comes with Canon 4.1–73.8mm
zoom.
Ikonoskop A-Cam dII. 16mm CCD 1080P sensor. Sends uncompressed RAW
in CinemaDNG format to integrated SSD recorder.
Red Scarlet. 35mm (S35) CMOS sensor, 4K. Modular design; components
sold individually. Records 4K RAW using Redcode compression to SSDs.
Sony NEX-FS700U. 35mm (S35) CMOS sensor, 4K. Records 1080P using
AVCHD to SD card/MemoryStick or via the FMU (flash memory unit) port,
or it can output 8-bit 4:2:2 (with embedded timecode) via HDMI 1.4 or
3G/HD-SDI to an external recorder. High frame rates in short bursts (8 to 16
seconds) up to 240 fps at full resolution or 960 fps at reduced resolution.
Future option allowing 4K output to external recorder.
Sony XDCAM PMW-EX3. 1⁄2″ 3-CMOS, full-res 1080P sensors. Records 8-bit
4:2:0 MPEG-2 (XDCAM, up to 35 Mbit/sec) & DVCAM, to SxS
ExpressCards (two slots). 1–60 fps (720P); 1–30 fps (1080P). Time-lapse
capability. Uses 1⁄2″ EX Mount.
Canon EOS 5D Mark III. Still camera (DSLR) with a single FF35 22MP
CMOS sensor. Records to CF or SD cards.
Canon XF105. Single 1⁄3″ CMOS sensor. Records 8-bit 4:2:2 MPEG2 (50
Mbit/sec) to CF cards.
JVC GY-HM750U. 1⁄3″ 3-CCD sensors. Records 8-bit 4:2:0 MPEG2 (1080P,
1080i, 720P), up to 35 Mbit/sec, to SDHC cards (two slots). Optional SxS
card adaptor. 1–30 fps (1080P/i) or 1–60 fps (720P). Comes with detachable
Canon f/1.6 4.4–61.6mm zoom.
JVC GY-HD250U. 1⁄3″ 3-CCD sensors. Records 8-bit 4:2:0 MPEG2 (720P,
480P/i), up to 35 Mbit/sec, to HDV tape. 1–60 fps (720P). Comes with
detachable Canon f/1.6 4.4–61.6mm zoom.
JVC GY-HM100U. 1⁄4″ 3-CCD sensors. Records 8-bit 4:2:0 MPEG2 (1080P,
1080i, 720P), up to 35 Mbit/sec, to SDHC cards (two slots). 1–30 fps
(1080P/i) or 1–60 fps (720P). Fixed Fujinon f/1.8 3.7–37mm zoom.
Nikon D800. Still camera (DSLR) with FF35 36MP CMOS sensor. Records to
CF or SD cards. Has clean 8-bit 4:2:2 HDMI-out for recording to external
device.
Panasonic AG-AF100. Single Micro 4/3 CMOS sensor. Shoots 12–60 fps;
records 1080P, 1080i, and 720P using SD cards to MPEG-4 AVC/H.264. Has
HD-SDI out. Micro 4/3 lens mount.
Sony HDR-AX2000. 1⁄3″ 3-CMOS 720P sensor, records 1080/24P to 60i using
Sony Memory Stick or SD cards. MPEG-4 AVC/H.264 (up to 24 Mbit/sec).
Fixed f/1.6 4.1–82mm zoom.
Sony NEX-FS100U. Single APS-C 35mm CMOS sensor. Shoots 1–60 fps at
1080P. Records using Sony Memory Stick or SD cards to MPEG-4
AVC/H.264. Uses Sony E-mount lenses.
Canon Vixia HF S21. Single 1⁄2.6″ CMOS (3264 x 1840 pixels) sensor.
Records AVCHD to SD cards (two slots) or internal 64GB flash drive.
24P/30P/60i. Fixed f/1.8 6.4–64mm zoom.
Canon XA10. Single 1⁄3″ CMOS sensor. Records to internal 64GB flash drive
in 8-bit 4:2:0 MPEG-4 AVC/H.264 (up to 17 Mbit/sec). Fixed f/1.8 4.25–
42.5mm zoom.
Panasonic Lumix DMC-GH2. Still camera (mirrorless) with single Micro 4/3
16MP CMOS sensor. Records AVCHD to SD/SDHC cards.
Sony NEX-VG10. Single APS-C 35mm sensor. Using Sony Memory Stick or
SD cards, records 30P within a 1080/60i stream using MPEG-4 AVC/H.264
(up to 24 Mbit/sec). No 24P. Sony E-mount lenses.
The author would like to thank Adam Wilt, Randy Wedick, Charles Crawford,
and Phil Rhodes for their technical advice.
What’s New
A new section on how to evaluate images effectively discusses the
importance of screen heights, complete with illustrations. (Page 60)
For motion pictures that will enjoy an IMAX release, you will find a page
briefly illustrating how that process works. (See Figure 11)
If you are considering digital cameras instead of film, after the discussion of
film formats and aspect ratios, you will find a new section with guidelines and
information to help you understand what to look for when examining a digital
camera. This should allow you to make an informed decision about what will
most benefit your motion picture, whether comparing various digital cameras
or comparing digital to film (see page 81).
When you are closer than seven screen heights to your normal standard
definition television set, you can resolve the pixels and scanning lines that
make up the image. From further than seven screen heights away, however,
it is impossible to resolve any of the picture elements that make up the image;
basically, the image will appear as sharp as your eyes can resolve.
For example, if you were looking at an IMAX image and a standard definition
TV image side by side, but you were evaluating the images from 7 screen
heights away, both images would appear to have equal resolution. Once you got
closer than six or seven screen heights, the TV image would begin to exhibit its poor
image quality, while the IMAX image would look exemplary at closer than 1⁄2 a
screen height. If you have been in an IMAX theater, take notice that the back
row of the theater is rarely much more than one screen height away from the
screen; something IMAX can do because of the dramatically high resolution of
IMAX photography.
At the risk of stating the obvious: the higher the resolution of the image, the
closer you can get to it before noticing pixels (in the case of digital) or grain
(in the case of film).
In the case of HDTV, you have to get closer than three screen heights before
you can start to see the pixels in the image. In current stadium seat theaters, our
audiences are sitting no further than three screen heights from the images, and,
in most cases, closer than two screen heights. For this reason, it’s important we
evaluate imagery from the same screen distance as our audience.
(Figure: your proximity to the image affects your perception of image quality.)
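The screen-height figures quoted above follow directly from normal visual acuity. The sketch below (an added illustration; the one-arc-minute acuity limit is the stated assumption, not a figure from this chapter) recovers both the seven-screen-height figure for standard definition and the three-screen-height figure for HD:

    import math

    def screen_heights_to_resolve(lines, acuity_arcmin=1.0):
        # One pixel is 1/lines of a screen height tall; at a distance of N
        # screen heights it subtends roughly (1/lines)/N radians. Solve for
        # the N at which that angle equals the eye's acuity limit.
        theta = math.radians(acuity_arcmin / 60.0)
        return 1.0 / (lines * math.tan(theta))

    print(round(screen_heights_to_resolve(480), 1))    # SD: ~7.2 screen heights
    print(round(screen_heights_to_resolve(1080), 1))   # HD: ~3.2 screen heights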
Most studios still have screening rooms that place you anywhere from six to
eight screen heights from the image. Therefore, when sitting that far from the
screen, you can fool yourself into thinking the image quality is adequate, perhaps
even superb. Yet, when viewed from a distance of 1½ to 2 screen heights, your
conclusions may be entirely different. Please make sure when evaluating image
quality, you place yourself within 2 screen heights of the projected image. It is
important that critical evaluation of image quality be done from where the
audience will see the picture.
The display medium must be taken into consideration as well. Many
people evaluate high-definition (HD) or 2K images on HD displays that can
resolve only 1400 of the 1920 horizontal pixels contained in an HD image.
Those 520 missing pixels of resolution can hide artifacts that will show up
clearly on a digital cinema projector, or a 35mm film out.
FILM FORMATS
History and a Definition of Terms
Currently, in the United States (and most of the world), the most prevalent
motion picture film formats, or aspect ratios, are 1.85 and 2.40 (2.40 is still often
referred to as 2.35).1 As a point of reference, these ratios are determined by
dividing the width of the picture by the height, which is why you will also see
them written as 1.85:1 or 2.40:1. Verbally, you will hear them referred to as “one
eight five” or “two four oh.” 2.40 is also referred to as anamorphic, or “Scope,”
referring to its roots in CinemaScope.
An examination of films over the past sixty years shows that format is not
automatically dictated by dramatic content. It is a creative choice, determined by
the cinematographer and director. The full range of drama, comedy, romance,
action, or science fiction can be found in both aspect ratios. The purpose here is
to advise on the pros and cons of both aspect ratios and the photographic
alternatives available to achieve them. This should help a filmmaker make an
informed decision as to which format is best for a given project. Most
importantly, you will be presented with the “conventional wisdom” arguments
for and against the formats; this conventional wisdom will either be endorsed or
countered with reality. This knowledge, in the end, will help you realize that,
creatively, there are no technical obstacles to choosing any format. However, you
will also understand the aesthetic impact those choices will have upon your
production.
As a clarification, the term full aperture refers to the entire image area
between the 35mm perforations, including the area normally reserved for the
soundtrack. In other literature you will find full aperture used interchangeably
with camera aperture, silent aperture and full silent aperture. All four terms
define the same area of the 35mm film frame.
In general, the term Academy aperture is used when referring to the imaging
area of the negative excluding the analog soundtrack area. More properly, it
would be referred to as the “sound aperture,” the term used to indicate the area
that remained when analog soundtracks were first added to 35mm film.
However, throughout this chapter, we will follow convention, and use the term
Academy aperture when referring to the imaging area excluding the soundtrack.
Academy aperture2 is an aspect ratio of 1.37:1, centered within the sound
aperture area, arrived at jointly by the American Society of Cinematographers
and Academy of Motion Picture Arts and Sciences in the early days of sound to
restore compositions closer to the 1.33 aspect ratio of silent films, and resolve
the unpleasant composition produced by framing images within the narrow
sound aperture.
All 1.85 composed films are achieved with “normal” spherical lenses.
However, the 2.40 aspect ratio can be achieved two ways. One method is with
the use of anamorphic3 lenses that squeeze the image to fit within the Academy
aperture (see Figure 6). The alternate method (see Super 35 in Figures 7, 8, and
9) uses normal lenses without any distortion of the image, which is later
squeezed for theatrical film release. Both methods will be discussed here.
The 1.85 and Super 35 formats can also be captured with cameras that use a
3-perf pull-down movement. While all 35mm illustrations in this chapter use a
4-perforation frame, were you to use a 3-perf camera for 1.85 or Super 35, the
advantages and disadvantages remain the same whether 3- or 4-perf.
3-perf camera movements do not provide adequate height for anamorphic
photography.
Also, the film formats discussed here deal with general 35mm motion picture
photography. Formats such as VistaVision and 65mm are most often used for
visual effects and special event cinematography and would require a chapter
their own to discuss. However, Figures 10 and 11 illustrate how widescreen
compositions are presented when converted to IMAX, and how films
photographed in 65mm are released in 70mm.
At the end of this chapter, we’ll cover current methods for achieving 1.85 and
2.40 aspect ratios with currently available digital cameras.
COMPOSITION
Before getting into specifics about the different formats, I want to point out
the composition differences between the two aspect ratios 2.40 and 1.85,
regardless of how they are achieved photographically.
Figure 3 is an image of the Taj Mahal with a 2.40 aspect ratio outlined in
yellow.
Figure 4: Two 1.85 compositions.
In Figure 4, two 1.85 aspect ratios are outlined by yellow rectangles. The
larger of those two rectangles represents a 1.85 composition equal in its width to
the 2.40 aspect ratio of Figure 3. The smaller 1.85 rectangle is equal in height to
the 2.40 ratio of Figure 3.
The purpose here is to illustrate that, depending on your framing, a 1.85
image has the potential to encompass as much width as a 2.40 composition.
Although 1.85 will take in the same width with greater height in the
composition, it’s important to realize that wide sets and vistas are not restricted
to the 2.40 format.
PROS AND CONS – A CLARIFICATION
While this chapter addresses the pros and cons of these formats, it can appear
contradictory upon reading when a disadvantage is countered with an opposite
perceived advantage, and vice versa. This is because, in fact, many pros and
cons are merely mythology about a certain shooting format, and are based on
nothing other than conventional wisdom, or outdated production practices, rather
than current practical experience.
I believe you will be able to sort the facts from opinions. That being said,
nothing here is intended to dissuade you from choosing a given film format.
Where there are possible true disadvantages to a format, it is presented so you
have your eyes wide open to any potential challenges or creative compromises.
We are working in a very artistic medium where there can truly be no absolute
rights and wrongs. What one person may find an objectionable grainy image,
another filmmaker may feel enhances the story being told.
Where possible, when listing outdated opinions, I immediately counter with
the present fact or contrary point of view. With this presentation, I want to
give you all the arguments and points of view about these formats so you can
draw your own conclusions. If you study it carefully, you will be able to
effectively articulate your desire to photograph a film in the aspect ratio you feel
will most benefit the story.
I. THE 1.85 ASPECT RATIO
Photographed in Normal Academy Aperture Photography
From the late 1950s until the early 1990s, 1.85 was far and away the most
common aspect ratio for motion pictures filmed in the United States. This trend
shifted a bit in the 1990s with the advent of more and more films composed for
the 2.40 aspect ratio. With the proliferation of films photographed in Super 35,
in recent years less than one-third of the top 100 motion pictures have been
composed in 1.85. Around the world the spherical aspect ratio most commonly used swung
between 1.85 and 1.66 depending on the country; however, in many countries
that used to photograph exclusively in the 1.66 aspect ratio, 1.85 has become
more common.
Figure 5 portrays how 1.85 films get from camera to screen. In the illustration,
the red box indicates the 1.85 composition area of the frame. In actual practice,
there is no indication on the exposed negative of the area composed for. The area
above and below that area contains picture information, but is masked out when
projected or eliminated entirely in the digital intermediate.
A. ADVANTAGES OF 1.85
1. Intimate Format: Many perceive 1.85 as more appropriate for pictures that
lend themselves to a more compact visual. Since close-ups virtually fill the
entire frame, it is often considered a more “intimate” format. However, it can
be argued that many “intimate” films have been composed for the 2.40
aspect ratio.
2. Interiors: If a film is largely interiors, 1.85 is often argued to be the preferred
format, since interiors don’t involve the wide panoramic vistas associated with
2.40. Conversely, many do not weigh interiors or exteriors in their choice of
format.
Figure 5: Standard Academy 1.85:1
3. Depth of Field: Greater depth of field (the total area in focus at a given
distance). Since 1.85 uses shorter focal length lenses as compared with
Anamorphic, greater depth of field is available.
This advantage is often negated when lenses are shot “wide open,” resulting
in little or no gain in depth of field.
4. Composition: Considered by many to be a “taller” composition, lending
itself to framings with more emphasis on the vertical than the horizontal:
cathedral interiors or city skylines, for example.
5. Wide Sets: An opinion often expressed is that sets don’t need to be as wide
on a 1.85 film as one photographed in 2.40, resulting in savings in set
construction. However, there are many who would argue film format has no
bearing on the width of set construction. As the examples in Figures 3 and 4
(page 64) point out, it’s possible for 1.85 to require as wide a set as 2.40,
depending on the composition.
6. Least Complex: 1.85 is the simplest format to execute from a
mechanical/technical standpoint. The choice of photographic equipment is
virtually unlimited, as any standard 35mm camera will accommodate this
format. Plus, you have a choice of very wide and “fisheye” lenses not
available in anamorphic.
7. Expendable Cameras: If a stunt camera mount is required that risks
destroying a camera lens, spherical lenses can be used that are much more
affordable than expensive anamorphic lenses.
8. Video Transfers: While standard-definition television sets are effectively
no longer manufactured, many production companies are still concerned about
accommodating the 1.33 (4 x 3) aspect ratio of standard-definition television.
With some effort
on the shooting company’s part, the 1.85 composition can protect for
standard definition video so that a simple one-to-one transfer can be done
without panning and scanning. While left and right image integrity remain
virtually intact this way, there is an approximate 33% increase in the vertical
height of the composition.
Although many think it routine to protect the TV area from intruding objects
(e.g., lights, microphones, etc.), it makes the cinematographer’s and
soundman’s jobs more difficult by preventing them from bringing lights and
microphones down close to the area of composition. This is why many
cinematographers shooting 1.85 will request to shoot with a 1.66:1 aspect
ratio hard matte. While the same width on the film, 1.66 is slightly taller than
1.85, closely approximating the height of the 1.33 (4 x 3) TV frame. This
gives the cameraman more freedom to light his subjects without fear of a
light or microphone showing up when transferred to video.
Yet, in a world where 1.78:1 (16 x 9) aspect ratio video displays are now the
norm, 1.85, for all intents and purposes, drops neatly into the HDTV
frame. While not precisely the same aspect ratio, the 1.78:1 HDTV frame is
only 42 pixels taller than the 1.85 aspect ratio. Meaning, if you letterboxed
a 1.85 image in HDTV, you would have a 21-pixel band above and below
the image (see the sketch at the end of this list).
9. Sharper Lenses: Many people believe it is an advantage to shoot 1.85
because spherical lenses are sharper than anamorphic lenses. This is a
misconception. It is true that spherical lenses are often sharper than
anamorphic; however, the much greater negative area used with anamorphic
more than makes up for the subtle difference in resolution from spherical
lenses. Properly executed camera tests comparing the two formats always
reach this conclusion.
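The letterbox arithmetic mentioned in item 8 is easy to verify (a quick illustrative calculation; the rounding to whole pixels is mine):

    # Image height of various aspect ratios letterboxed in a 1920 x 1080 raster.
    raster_w, raster_h = 1920, 1080
    for aspect in (1.85, 2.40):
        image_h = round(raster_w / aspect)
        bars = raster_h - image_h
        print(aspect, image_h, bars // 2)
    # 1.85 -> 1038 lines, with 21-pixel bands top and bottom (42 total)
    # 2.40 ->  800 lines, with 140-pixel bands top and bottom

The same arithmetic explains the 800-line limit mentioned later in this chapter when a 2.40 composition is extracted from a 1920 x 1080 sensor.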
B. DISADVANTAGES OF 1.85
1. Negative Area: The principal disadvantage is the actual size of the 1.85
format on the negative. Because of the smaller area, 1.85 is noticeably
grainier than anamorphic 2.40. This is not as noticeable at the original
negative stage or when projecting in small screening rooms, but becomes more
pronounced when projected in large theaters.
Graininess can be mitigated with digital grain reduction techniques in the DI;
however, if not applied carefully, such techniques can leave the images looking
artificial, soft, and almost video-like. When compared
to 1.85, anamorphic 2.40 uses 55% more area on the negative. This is not
insignificant.
2. Composition: Because of the greater height of the 1.85 aspect ratio, ceilings
of sets are more prone to being photographed. This can be a restriction on
how easily a cameraman can light an interior set (visible ceilings limit where
a cameraman can hang lights). On some sets it may require additional
construction, as has been the experience on films shooting on studio back
lots. Sound can also be compromised by the microphone having to be farther
away.
3. Magnification: When projected, the area of the frame for 1.85 is subjected to
much greater magnification on a screen than an anamorphic frame, resulting
in more apparent grain in the image.
4. Jump and Weave: If your motion picture will be projected from a film print,
film projectors always have some level of jump and weave, ranging from
annoying to barely detectable. Because of 1.85’s vertical height on the film
frame, it is subjected to a 55% increase in visible jump and weave over an
anamorphic image projected in the same theatre. This artifact is eliminated
with digital projection.
5. 70mm: Not truly compatible with standard 70mm: Although it can be done,
there is a large amount of unused print on the sides when blown up to 70mm.
Also, because of the greater magnification in 1.85/70mm prints, grain is
much more apparent than in anamorphic blow-ups to 70mm. While not
generally an issue any more, this may be a choice for special events where a
film is to be projected on a particularly large screen (e.g., stadiums or
outdoor venues), and 70mm is chosen to mitigate jump/weave and gain more
light on the screen. This is less of an issue with the proliferation of digital
projection, but still an available option.
II. THE 2.40 ASPECT RATIO
Photographed with Anamorphic (Scope) Lenses
The following is a discussion of the 2.40 aspect ratio photographed with
anamorphic lenses. A discussion of Super 35 composed for 2.40 will follow.
Anamorphic 2.40:1 (also known as CinemaScope®, Panavision®,
Technovision®, or any other number of brand names), optically squeezes the
width of the image to fit within the 35mm full aperture. While this discussion is
about current state of the art 35mm anamorphic releases, we are now in an era of
digital delivery where use of the area reserved for analog soundtracks could be
employed again. For a film only presented digitally, a cinematographer could
choose to use the entire full aperture width of the frame and compose for a 2.55
or 2.66:1 aspect ratio as when CinemaScope was originally conceived.
Figure 6 portrays how anamorphic 2.40 images get from camera to screen.
Many myths persist from the early days of CinemaScope when lenses
distorted images, film speeds were slow, and sets required the use of arc lamps
and their associated lighting crews. With today’s sharper anamorphic lenses,
combined with higher speed film stocks and more manageable lighting
packages, the challenges of shooting anamorphically have all but disappeared.
The common topline variant of Super 35 was originally conceived with the
notion that it could be a generic film format: shoot a movie with the option of
releasing it in any aspect ratio you want. The common topline was supposed to
lessen the effect of
changing aspect ratios by maintaining the headroom and raising or lowering the
bottom of the frame. However, most cinematographers find that composing for
multiple formats neutralizes the effect of any one composition. In actual practice,
it has mainly been used to mitigate conversion of Super 35 wide screen films to
the 1.33 (4 x 3) aspect ratio of standard television. Yet, even in this application,
the change in composition from 2.40 to television’s 1.78 or 1.33 aspect ratios can
rarely be achieved automatically (close-ups become medium shots, etc.), and
usually requires the same care as an Anamorphic pan and scan.
Experience has shown, as most filmmakers will agree, that just modifying a film’s
aspect ratio to fit within the video realm is a creative process. To assume that a
generic format will automatically deliver pleasing compositions no matter what
aspect ratio you choose will not hold up creatively.
That being said, there are several filmmakers who employ the common topline
format with great success and feel it allows them to create a version for those
venues still displaying a 4 x 3 image that is less objectionable than panning and
scanning. Those most successful at the transformation are careful to evaluate
each scene and camera angle on its own merits, and don’t attempt a generic
framing for an entire film.
Of course, the ideal is to view films in the composition they were originally
intended to be seen, without any alteration in the composition.
The only other technical note with regard to the common topline format is
that scenes photographed with a zoom in common topline do not automatically
transfer over to another format by just lowering the bottom of the frame line.
Meaning, if a 1.33 extraction is performed, the zoom will still be centered
towards the top of the frame, where the 2.40 composition is centered, and a 1.33
composition will appear to drift upwards during a zoom. Any other aspect ratios
must be centered on that same point (1.33, 1.85, etc.) when a zoom takes place
within a shot. For these shots, common topline must be panned and scanned.
The rest of this discussion will only deal with Super 35 composed for a 2.40:1
aspect ratio, and apply to either common center or common topline versions.
Super 35’s 2.40:1 aspect ratio has the same compositional
benefits and constraints as anamorphic 2.40; here we discuss those qualities that
are unique to the Super 35 format.
More recently, however, Arri, Red, and Sony have built cameras with
unique designs optimized for motion picture cinematography.
Previously, the photochemical process of film put all the burden of imaging
science on the film itself, allowing motion picture camera manufacturers to
focus, when building cameras, on everything required to get the image to the
film: lenses, stable film transport, ergonomics, etc.
With the world of digital imaging, there has been a paradigm shift. Now the
burden of the imaging science has been put on the camera manufacturers
themselves; something once left to the likes of Kodak or Fuji.
When composing for the 2.40:1 aspect ratio, most digital cameras will capture
a letterboxed 2.40 slice out of the center of the imaging sensor, which, in 1920 x
1080 cameras, results in the pixel height of the image being limited to only 800
lines.
Cameras with sensor aspect ratios approaching 2.0:1, or with 1.33 sensors
that can use anamorphic lenses, mitigate this loss of resolution when
composing for 2.40:1.
Yet another camera does creative things with the clocking of CCD pixels so
that the entire imaging area is still utilized when shooting a 2.40 image, with a
subtle compromise in overall resolution.
We are already witnessing more and more cameras with imaging sensors with
4K pixel resolution. So, hopefully, we’ll be back to movies having the highest
fidelity imagery again.
Fill Factor
There is one area of a digital camera’s specifications that is most helpful in
determining its sensitivity and dynamic range: the statistic that conveys how
much of an imaging sensor’s area is actually sensitive to, and captures, light
versus how much is blind, relegated to the circuitry that transfers image
information. This is called the “fill factor.” It is also a statistic that is not
readily published by all camera manufacturers.
This is an area where not all digital cameras are created equal. The portion of
a digital imaging sensor’s area that is actually sensitive to light (the fill factor)
has a direct correlation to image resolution and exposure latitude. With the currently
available professional digital motion picture cameras, you will find a range of
high profile cameras where less than 35% of the sensor’s total area is sensitive to
light, to cameras where more than 50% of the sensor is light sensitive.
As film cinematographers, we are used to working with a medium where it
was presumed that the entire area of a 35mm film frame is sensitive to light; in
digital parlance, that would be a fill factor of 100%. When a digital camera has a
fill factor of 40%, that means it is throwing away 60% of the image information
that is focused on the chip. Your instincts are correct if you think throwing away
60% of image information is a bad idea.
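As a rough illustration of what a low fill factor costs, assume for simplicity that sensitivity scales directly with sensitive area (an assumption added here for illustration; lenslets, discussed below, complicate this in practice):

    import math

    # Stops of light discarded relative to a medium that is 100% sensitive,
    # assuming sensitivity scales directly with sensitive area (an assumption
    # for illustration; lenslets and processing alter the real-world numbers).
    for fill_factor in (0.35, 0.40, 0.50):
        stops_lost = math.log2(1.0 / fill_factor)
        print(fill_factor, round(stops_lost, 1))
    # 35% -> ~1.5 stops; 40% -> ~1.3 stops; 50% -> exactly 1 stop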
Since the imaging sites on a solid state sensor are arrayed in a regular grid,
think of the 40% sensitive area as being holes in a steel plate. Thus, the image
gathered is basically similar to shooting with a film camera through a fine steel
mesh. You don’t actually see the individual steel gridlines of the mesh, but it
tends to have an effect on image clarity under most conditions.
The fill factor is accurate only when it ignores lenslets and reflects only the
size of the actual photosite well that captures photons vs. the rest of the imaging
area.
With this statistic, you can quickly compare camera capabilities, or at least
understand their potential.
The higher the fill factor of a given sensor (closer to 100%), the lower the
noise floor will be (the digital equivalent of film grain) and the better the
dynamic range will be.
In addition, solid state sensors tend to have various image processing circuits
to make up for things such as lack of blue sensitivity, etc., so it is important to
examine individual color channels under various lighting conditions, as well as
the final RGB image. Shortcomings in automatic gain controls, etc. may not
appear until digital postproduction processes (VFX, DI, etc.) begin to operate on
individual color channels.
Sound complicated? Perhaps, but all you need to understand is that product
claims can, and will, be misleading. We’ve lost our way a bit when thinking that
counting pixels alone is a way of quantifying a digital camera’s capability. A
return to photographing resolution charts and actually examining what these
cameras are capable of will serve you much better in understanding how a given
camera will help you tell your story.
In short, do not be seduced by technical specification mumbo jumbo. Look at
images photographed by the camera in question, and evaluate from a proper
distance of screen heights from the image.
A 4K scan captures the same areas of the film frame as a 2K scan, but the
image yielded is usually 4096 x 3112. With both 2K and 4K scanners, each
individual pixel contains a single unique sample of each of red, green and blue.
In the digital camera world, 2K often refers to an image that is 1920 pixels
horizontally and 1080 pixels vertically. Again, each individual pixel contains a
single unique sample of each of red, green and blue. This sampling of a unique
red, green and blue value for each pixel in the image is what is called a “true
RGB” image, or in video parlance, a 4:4:4 image. While these cameras have an
image frame size that corresponds to the HDTV standard, they provide a 4:4:4
image from the sensor, which is not to HDTV standard. That is a good thing, as
4:4:4 will yield a superior picture.
Pixels
Much confusion could be avoided if we would define the term “pixel” to
mean the smallest unit area which yields a full color image value (e.g., RGB,
YUV, etc., or a full grayscale value in the case of black and white). That is, a
“pixel” is the smallest stand-alone unit of picture area that does not require any
information from another imaging unit. A “photosite” or “well” is defined to be
the smallest area which receives light and creates a measure of light at that point.
All current digital motion picture cameras require information from multiple
“photosites” to create one RGB image value: in some cases three photosites for
one RGB value, in others six, and then there are Bayer pattern devices that
combine numerous photosites to create one RGB value.
These definitions inevitably lead us to define the term “resolution” as the
number of pixels, each of which yields a single full color or grayscale image
value (RGB, YUV, etc.).
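To make the photosite-to-pixel distinction concrete, here is a toy demosaic of an RGGB Bayer mosaic (a deliberately simplified sketch added for illustration; production cameras and post software use far more sophisticated reconstruction):

    import numpy as np

    def demosaic_bilinear(raw):
        # Toy bilinear reconstruction of an RGGB Bayer mosaic: every output
        # pixel borrows from neighboring photosites of each color.
        h, w = raw.shape
        rgb = np.zeros((h, w, 3))
        red = np.zeros((h, w)); red[0::2, 0::2] = 1
        blue = np.zeros((h, w)); blue[1::2, 1::2] = 1
        green = 1 - red - blue
        for ch, mask in enumerate((red, green, blue)):
            known = raw * mask
            value_sum = np.zeros((h, w)); count = np.zeros((h, w))
            for dy in (-1, 0, 1):       # average each 3x3 neighborhood of
                for dx in (-1, 0, 1):   # same-color samples (edges wrap here)
                    value_sum += np.roll(np.roll(known, dy, 0), dx, 1)
                    count += np.roll(np.roll(mask, dy, 0), dx, 1)
            rgb[..., ch] = value_sum / np.maximum(count, 1)
        return rgb

Every output pixel is interpolated from several photosites, which is why a photosite count and true RGB resolution are not the same number.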
(Figure: two of many examples of photo sensors used by digital cameras; one
illustrates a Bayer pattern array of photosites using a photogate sensor, the
other an interline transfer array of photosites employing a photodiode lenslet
design.)
© 2012 Robert C. Hummel III
Special Thanks
To Stephen H. Burum, ASC; Daniel Rosen; Garrett Smith; Evans Wetmore; and
Anne Kemp Hummel for helping bring greater clarity to this discussion.
1. A historical note regarding 2.35 vs. 2.40 vs. 2.39. CinemaScope films with an analog soundtrack were
originally an aspect ratio of 2.35:1. In the early 1970s, the height of the anamorphic aspect ratio was
modified slightly by SMPTE to help hide splices, effectively changing the ratio to 2.40:1. Old habits die
hard and many still refer to the format as 2.35:1. Also, in 1995, SMPTE again made a change in the size of
the CinemaScope projection area to accommodate digital soundtracks. In both cases the math of the aspect
ratio yields an aspect ratio number of 2.39 and continues on several places to the right of the decimal point.
Cinematographers felt that rounding up to 2.40 ensured there would be less confusion with the 2.35 aspect
ratio. The actual difference between 2.39 and 2.40 is so inconsequential that, from a compositional
standpoint, they are the same.
2. If you actually calculate the aspect ratio of Academy aperture from SMPTE specs, the math has always
come out to 1.37:1. More often than not, people will refer to Academy aperture compositions as 1.33 (the
original aspect ratio of silent films), and almost never is the ratio called 1.37. The compositional difference
between the two is negligible.
3. Anamorphic comes from the word anamorphosis, meaning an image that appears distorted, yet under the
proper conditions will appear normal again. Leonardo da Vinci is credited with first using the technique
during the Renaissance.
4. When film resolutions are discussed, and the terms 2K or 4K are used, these refer to the number of lines
that can be resolved by the film. In the case of 2K, that would mean 2048 lines or 1024 line pairs as
photographed from a resolution chart. In the case of 4K, that would mean 4096 lines or 2048 line pairs
as photographed from a resolution chart.
In digital imagery the term is applied a bit more loosely. While 2K and 4K still mean 2048 and 4096,
respectively, with digital scanning and photography it refers to a number of photo sites on the scanner or
camera image sensor. Numbers of pixels does not necessarily translate into actual image resolution. Add to
the confusion that many manufacturers have a curious habit of rounding up,
which is how 1920 pixels gets called “2K.”
Anamorphic Cinematography
John Hora is a member of the ASC Board of Governors. His many credits as a
cinematographer include the feature films The Howling, Twilight Zone: The
Movie, Gremlins and Honey, I Blew Up the Kid.
Exposure Meters
by Jim Branch
Gossen Color-Pro 3F
Type: Handheld digital 3-color meter for ambient and flash; determines
photographic color temperature of light sources and filtration required.
Light Sensor: 3 balanced silicon photodiodes for ambient and flash.
Measuring Range: 2000 to 40,000 degrees Kelvin.
Light Balancing Filters: -399 to 475 Mired Scale, switchable to corresponding
Kodak Wratten filters.
CC Filter Values: 0 to 95 Magenta and 0 to 06 Green.
Power Source: 9V MN1604 or equivalent.
Dimensions: 5″ x 23⁄4″ x 1″
Weight: Approximately 4.5 ounces.
Gossen Luna-Star F2
Type: Handheld exposure meter for measuring ambient and flash in both
incident and reflected light (with 5° Spot attachment).
Light Sensor: sbc photodiode, swivel head
Measuring Range: Ambient light (at ISO 100/21°): EV -2.5 to +18; Flash Light
(at ISO 100/21°) f/1.0 to f/90.
Measuring Angle in Reflected Mode: 30°
ISO Film Speeds: 3/6° to 8000/40°
Camera Cine Speeds: 8–64 fps, as well as 25 fps and 30 fps for TV.
Shutter Speeds: 1⁄8000 sec. to 60 min.
Flash Sync Speeds: 1 to 1⁄1000 sec., as well as 1⁄90 sec.
F-Stops: f/1.0 to f/90.
Power Source: 9V battery.
Dimensions: 23⁄4″ x 5″ x 1″
Weight: Approximately 4.5 ounces.
Gossen Luna-Pro F
Type: Handheld analog exposure meter for measuring ambient and flash light.
Light Sensor: sbc photodiode.
Measuring Range: Incident Light (at ISO 100/21°): EV -4 to +17; Flash Light
(at ISO 100/21°) f/0.7 to f/128.
ISO Film Speeds: 0.8/0° to 100,000/51°
Camera Cine Speeds: 4.5–144 fps.
Shutter Speeds: 1⁄4000 sec. to 8 hours.
Flash Sync Speeds: 1⁄60 sec.
F-Stops: f/0.7 to f/128.
Power Source: 9V battery.
Dimensions: 21⁄2″ x 45⁄8″ x 3⁄4″
Weight: Approximately 3.3 ounces.
Gossen Luna-Pro S
Type: Handheld analog exposure meter for measuring ambient sun and moon
incident and reflected light.
Light Sensor: Photoresistance (CdS)
Measuring Range: Incident Light (at ISO 100/21°): EV -4 to +17.
Measuring Angle in Reflected Light Mode: 30° (with Tele attachment).
ISO Film Speeds: 0.8 to 25,000
Shutter Speeds: 1⁄4000 sec. to 8 hours.
Flash Sync Speeds: 1⁄60 sec.
F-Stops: f/0.7 to f/128.
Power Source: Two 1.5V batteries.
Dimensions: 23⁄4″ x 41⁄3″ x 13⁄8″
Weight: Approximately 6 ounces.
Gossen Ultra-Spot 2
Type: Handheld Spot meter for measuring ambient and flash light.
Light Sensor: sbc photodiode.
Measuring Range: Ambient Light (at ISO 100/21°): EV -1 to +22; Flash Light
(at ISO 100/21°) f/2.8 to f/90.
Measuring Angle of Reflected Light: Viewfinder (15°), metering field (1°).
ISO Film Speeds: 1/1° to 80,000/50°
Camera Speeds: 8-64 fps, as well as 25 fps and 30 fps for TV.
Shutter Speeds: 1⁄8000 sec. to 60 min, as well as 1⁄90 sec.
Flash Sync Speeds: 1⁄8 sec. to 1⁄1000 sec., as well as 1⁄90 sec.
F-Stops: f/1.0 to f/90.
Power Source: 9V battery.
Dimensions: 31⁄2″ x 21⁄4″ x 71⁄2″
Weight: Approximately 12 ounces.
Minolta Cinemeter II
Type: Handheld digital/analog incident meter.
Light Sensor: Large area, blue enhanced silicon photo sensor. Swivel head, 270
degrees.
Measuring capability: Direct readout of photographic exposures in full f-stops
or fractional f-stops. Also measures illuminance level in footcandles and
Lux.
Measuring Range: Direct-reading multiple-range linear circuit incorporates a
high quality CMOS integrated amplifier whose bias current is compensated
against drift up to 70° C.
Dynamic Range: 250,000 to one. Digital f-stop: f/0.5 to f/90 in 1⁄10 stop
increments. Analog f-stop: f/0.63 to f/36 in 1⁄3-stop increments. Photographic
illuminance: 0.20 to 6400 footcandles, 2 to 64,000 Lux.
Display: Vertical digital/analog bar graph that consists of 72 black liquid crystal
bars (6 bars per f-stop), that rise and fall depending on the light intensity.
The scale can be used in three different display modes (Bar, Floating Zone
and Dedicated Zone), and in three different measurement modes (f-stops,
footcandles and Lux).
Display Modes:
1. Bar mode is similar to a needle-reading meter, except that the movement is
up and down instead of left to right.
2. Floating Zone mode: a single flashing bar forms a solid bar that graphically
indicates the range of illumination in the scene. It can also be used for the
measurement of flickering or blinking sources.
3. Dedicated Zone mode is used to save up to five separate measurements.
Display Range:
ISO film speed: 12 to 2500 in 1⁄3-stop increments.
Camera speed: 2–375 fps
Shutter Angle: 45° to 90° in 1⁄9-stop increments, 90° to 205° in 1⁄12-stop
increments
Filter factors: 1⁄3-stop to 7 f-stops.
Resolution: Digital: 1⁄6 stop. Analog: 1⁄6 stop.
Accuracy: Digital 1⁄6 stop.
Additional Functions: Memory store and recall.
Lamp: Electroluminescent backlit liquid-crystal display.
Power Consumption: Operating (reading) 5 mA with backlight on.
Power Source: One 9V battery.
Dimensions: 65⁄8″ x 3″ x 13⁄16″
Weight: Approximately 10 ounces.
Pentax Spotmeter V
Measuring Range: EV 1–20 (100 ASA).
Film Speeds: ASA 6-6400
Shutter Speeds: 1⁄4000 sec.-4 min.
F-Stops: f/1 to f/128.
EV Numbers: 1–19 2⁄3; IRE 1–10.
Measuring Angle: 1 degree.
Measuring Method: Spot measuring of reflected light; meter switches on when
button pressed; EV direct reading; IRE scale provided.
Exposure Read Out: LED digital display of EV numbers (100 ASA) and up to
2 dots (each of which equals 1⁄3 EV).
Photosensitive Cell: Silicon Photo Diode.
Power Source: 6V silver-oxide battery.
Sekonic L508C
Type: Handheld exposure meter for ambient and flash incorporating both
incident and spot meter reading ability.
Measuring Range: Incident Light: EV (-) 2 to EV 19.9 @ 100 ISO
Reflected Light: EV3 to EV 19.9 @ 100 ISO
Incident Reading Head: 270° Swivel Head
Measurement Modes: Footcandle 0.1 to 99,000, Lux 0.63 to 94,000, Foot-
lambert 3.4 to 98,000
ISO Film Speed: ISO 3 to ISO 8000 (1⁄3-stops)
Camera Speed: 1, 2, 3, 4, 6, 8, 12, 16, 18, 24, 25, 30, 32, 36, 40, 48, 60, 64, 72,
75, 90, 96, 120, 128, 150, 200, 240, 256, 300, 360, 375, 500, 625, 750, 1000
fps
Shutter Angle: 5° to 270° in 5° steps, plus 144° and 172°.
Shutter Speeds: 30 min. to 1⁄8000 sec. (full, 1⁄2- or 1⁄3-stops)
F-Stop: f/1.0 to f/128.9 (full, 1⁄2- or 1⁄3-stops)
Accuracy: ±0.1 EV or less
Additional Functions: Digital f-stop and shutter speed readout in viewfinder;
Parallax-free rectangular 1°–4° spot zoom; Retractable incident Lumisphere
for dome or flat disc readings; Auto Shut-Off 20 min.
Lamp: Electro-luminescent auto illumination at EV6 and under for 20 Sec.
Power Source: 1.5V AA battery (alkaline, manganese or lithium)
Dimensions: 3.3″ W x 6.1″ H x 1.6″ D (82mmW x 161mmH x 39mmD)
Weight: 81⁄2 ounces (240g)
Sekonic L608C
Type: Handheld exposure meter for ambient and flash incorporating both
incident and spot meter reading ability.
Light Sensor: Silicon photo diodes (incident and reflected)
Measuring Range: Incident Light: EV (-) 2 to EV 22.9 @ 100 ISO
Reflected Light: EV3 to EV 24.4 @ 100 ISO
Measurement Modes: Footcandle 0.12 to 180,000, Lux 0.63 to 190,000; Cd/m2
1.0 to 190,000; Foot-lambert 0.3 to 190,000
Display Mode: Digital f/0.5 to f/128.9 (in 1⁄3-stops); Analog f/0.5 to f/45 (in 1⁄3-
stops)
ISO Film Speed: ISO 3 to ISO 8000 (1⁄3-stops)
Camera Speed: 1, 2, 3, 4, 6, 8, 12, 16, 18, 24, 25, 30, 32, 36, 40, 48, 50, 60, 64,
72, 75, 90, 96, 100, 120, 125, 128, 150, 200, 240, 250, 256, 300, 360, 375,
500, 625, 750, 1000 fps
Shutter Angle: 5° to 270° in 5° steps, plus 144° and 172°
Shutter Speeds: 30 min. to 1⁄8000 sec. (full, 1⁄2- or 1⁄3-stops)
F-Stops: f/0.5 to f/128.9 (full, 1⁄2- or 1⁄3-stops)
Filter Factors: 85, -n.3, -n.6, -n.9, -A3, -A6, -A9
Memory Function: 9 readings on analog scale (f/stop and shutter speed) with
memory recall and clear feature
Accuracy: ±0.1 EV or less
Additional Functions: Digital f-stop and shutter speed readout in viewfinder;
Parallax-free rectangular 1°–4° spot zoom with digital display. Shutter speed
and aperture are displayed in viewfinder; Retractable incident Lumisphere
for dome or flat disc readings; Digital Radio Transmitter Module that
eliminates the need for an additional external transmitter at the meter’s
position; 12 Custom Function Settings for advanced preferences and
features.
Power Source: 3.0V (CR123A lithium battery)
Dimensions: 3.5″ W x 6.7″ H x 1.9″ D (90mmW x 170mmH x 48mmD)
Weight: 91⁄2 ounces (268g)
Sekonic L-308BII
Measuring System: Incident or reflected for flash and ambient light; Silicon
photo diode.
Measuring Modes: Ambient and flash (cord, cordless) – incident and reflected
(40 degrees)
Receptor Head: Nonrotating, noninterchangeable.
Aperture/Shutter Priority: Shutter speed priority.
Display Read-out: Digital LCD
ISO Range: ISO 3 to 8000 in 1⁄3-stop increments.
F-Stops: f/0.5–f/90.9
Shutter Speeds: Ambient: 60 sec.–1⁄8000 sec.; Flash: 1 sec.–1⁄500 sec.
EV Range: (ISO-100) EV(-) 5 to EV 26.2
Camera Speeds: 8–28 fps.
Power Source: 1.5V AA battery.
Dimensions: 4.3″ x 2.5″ x .9″ (110 x 63 x 22mm) WDH
Weight: 2.8 ounces (80 g) without battery.
Sekonic L-398M
Measuring System: Incident light type, reflected light measurement is also
possible
Measuring Modes: Ambient incident and reflected
Receptor Head: Rotating, interchangeable receptor.
Display Readout: Indicator needle
ISO Range: 6 to 12,000; Measuring Range: EV4-EV17 (for incident light)
EV9-EV17 (for reflected light)
F-Stops: f/0.7–f/128
Shutter Speeds: Ambient: 1⁄8000 to 60 sec.; Flash: None.
EV Range: (ISO-100) EV 4 to 17
Camera Speeds: 8–128 fps
Power Source: Selenium photocell (no battery needed)
Dimensions: 4.4″ x 2.3″ x 1.3″ (112 x 58 x 34mm) WDH
Weight: 6.7 ounces (190 g)
Sekonic L-358
Measuring System: Incident: Dual retractable lumisphere, Reflected: with
included reflected light attachment; Silicon photo diodes
Measuring Modes: Ambient and flash (cord, cordless, multi flash)—incident
and reflected (54 degrees).
Receptor Head: Rotating 270 degree with built-in retractable lumisphere.
Aperture/Shutter Priority: Aperture and shutter priority
Display Readout: Digital LCD plus LCD analog, (auto-backlit LCD at EV 3
and under for 20 sec.)
ISO Range: Dual ISO settings: 3 to 8000 (1⁄3-steps)
F-Stops: f/1.0 to f/90.9 (full, 1⁄2- or 1⁄3-steps)
Shutter Speeds: Ambient: 1⁄8000 sec. to 30 min.; Flash: 1⁄1000 sec to 30 min.
EV Range: (ISO-100) EV -2 to 22.9
Camera Speeds: 2–360fps
Exposure Memory: Capable of nine exposure measurement readings
Shadow/Highlight Calculation: Yes
Brightness Difference: Displays the difference in 1⁄10-stop increments
Flash To Ambient Ratio: Yes
Multiple Flash: Yes, unlimited
Exposure Calibration: ±1.0 EV
Power Source: One CR123A lithium battery
Dimensions: 2.4″ x 6.1″ x 1.46″ (60 x 155 x 37mm) WHD
Weight: 5.4 oz (154 g)
Sekonic L-558
Measuring System: Dual function retractable incident lumisphere; 1° spot
viewfinder; Two silicon photo diodes (SPD).
Measuring Modes: Ambient and flash (cord, cordless, multi-flash) – incident
and spot (1°).
Metering Range: Ambient Incident Light: EV -2 to EV 22.9; Reflected Light:
EV 1 to EV 24.4; Flash Incident Light: f/0.5 to f/161.2; Reflected Light:
f/2.0 to f/161.2
Receptor Head: Rotating 270 degrees; with built-in retractable lumisphere.
Aperture/Shutter Priority: Aperture, shutter priority and EV metering value
Display Readout: Digital LCD plus LCD analog, (Auto-backlit LCD at EV 6 or
under for 20 seconds)
ISO Range: 3 to 8000 (in 1⁄3-stop steps)
F-Stops: f/0.5–f/128.9 (full, 1⁄2- or 1⁄3-stops); Under and Overexposure
indication.
Shutter Speeds: Ambient: 30 min. to 1⁄8000 sec. (full, 1⁄2- or 1⁄3-stops, plus 1⁄200
and 1⁄400); Flash: 30 sec. to 1⁄1000 sec. (Full, 1⁄2- or 1⁄3- stops; Special flash
speeds: 1⁄75, 1⁄80, 1⁄90, 1⁄100, 1⁄200, 1⁄400)
EV Range: (ISO-100) EV -9.9 to 46.6 (in 1⁄10-stops).
Camera Speeds: 2–360 fps (at a 180° shutter angle).
Exposure Memory: Up to nine readings on analog scale with memory recall
and clear feature.
Shadow/Highlight Calculation: Yes
Brightness Difference: ±9.9EV (in 1⁄10-stops) flash or ambient light evaluation.
Flash To Ambient Ratio: Yes; Displays percentage of flash in total exposure in
10% increments.
Multiple Flash: Unlimited readings.
Exposure Calibration: ±1.0EV for incident and reflected independently (in 1⁄10-
stops); Exposure Compensation ±9.9EV for incident and reflected
independently (in 1⁄10-stops); Filter compensation ±5.0EV for incident and
reflected independently (in 1⁄10-stops).
Power Source: Lithium type, One CR123A; Auto “shut-off” after 20 minutes;
Battery power displayed with a symbol in three status levels.
Dimensions: 3.5″ x 6.7″ x 1.9″ (90 x 170 x 48mm) WDH
Weight: 91⁄2 ounces (268 g)
Spectra Professional IV
Type: Handheld exposure meter for measuring incident and reflected light.
Light Sensor: Silicon Photovoltaic cell, computer-selected glass filters tailored
to spectral response of the film. Swivel head, 270 degrees.
Measuring Capability: Direct readout of photographic exposures. Also
measures illuminance level in foot-candles and Lux.
Measuring Range: One million to one (20 f-stops) direct-reading multiple-
range linear circuit controlled by microcomputer.
Display Range: ISO film speed: 3 to 8000 in 1⁄3-stop increments.
Camera Speeds: 2–360 fps.
Resolution: Digital: 0.1 f-stop. Analog: 0.2 f-stops.
Accuracy: Digital: 0.05 f-stop.
Additional Functions: Memory store and recall.
Lamp: Optional electroluminescent lamp for backlit liquid-crystal display.
Power Consumption: Operating (reading) 5mA. Data retention 5µA.
Power Source: 6V battery (A544, PX28L or PX28).
Estimated Battery Life: Approximately 1 year with normal use.
Dimensions: 51⁄2″ x 21⁄2″ x 2″.
Weight: Approximately 6 ounces.
To answer the remaining lens question, “Where and how does it fit into a cine
camera system?,” it is first necessary to describe a couple of aspects about
modern vs. old camera systems and how they, together with changing lens
technology (both design and manufacture), have influenced the progressive
design of cine lenses.
In the “old days,” say, the first half of the twentieth century, all lens designs,
including cine ones, had to be kept simple, employing up to five lens elements or
five doublet components (i.e., two elements cemented together). This was simply
due to the fact that anti-reflection coatings did not exist, thus causing
tremendous light loss through a lens. For example, a 50mm f/2.8 focal-length
lens containing five single lens elements and 10 refractive surfaces would
typically experience a 5% loss per surface (i.e., 95% transmission or 0.95
normalized transmission), meaning that for 10 surfaces the overall transmission
might be 60% (i.e., 0.9510). Since T-stop = f-stop ÷ √ (normalized transmission),
this 50mm f/2.8 lens would have a T-stop = 3.6. A f/2.8 lens working at T3.6
does not sound too bad, but consider a 10 or even 20 element f/2.8 lens with
corresponding T-stops of 4.7 and 7.8. Fortunately, cine lenses of this era had one
major advantage over later lenses: the film cameras they were attached to were
predominantly nonreflex. Therefore, the lens could be placed quite close to the
film, which made the lens-design task easier and the lenses less complicated. In
fact, it is interesting to note that in the case of old wide-angle, short focal-length
lenses, their back focal length was normally smaller than their focal length,
which would make them incompatible with modern reflex cameras. However, all
of the light that is lost has to go somewhere, and even in the best lens designs,
some of it would, by successive lens-element surface reflections, head toward
the film, causing ghosting and/or veiling glare. To aggravate matters even more,
these slow lenses of T3.6–T5.6 full aperture, coupled with the insensitivity of
film stock, say ASA 50, meant that huge amounts of light were required to
illuminate a scene—good for the lighting supplier but trouble for the
cinematographer, especially in terms of ghosting and veiling glare. Still, the
cinematographer benefitted from one great, indeed overwhelming advantage—a
larger depth of field than he/she is accustomed to now. So these early cine lenses
got close to the film, were necessarily simple in construction (no coatings), and
due to their lack of speed (aperture) performed well (because of good aberration
correction at their full aperture, albeit with careful lighting). A sampling of these
old lens forms is depicted in Figure 3, which includes their well-known technical
or inventor names.
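Those T-stop figures follow directly from the formula just given; a short worked example (a sketch using the same 5%-per-surface loss assumed in the text):

    import math

    def t_stop(f_stop, elements, surface_transmission=0.95):
        # Each uncoated element has two air-glass surfaces, so overall
        # transmission is surface_transmission ** (2 * elements).
        transmission = surface_transmission ** (2 * elements)
        # T-stop = f-stop / sqrt(normalized transmission)
        return f_stop / math.sqrt(transmission)

    print(round(t_stop(2.8, 5), 1))    # T3.6, the 50mm example above
    print(round(t_stop(2.8, 10), 1))   # T4.7
    print(round(t_stop(2.8, 20), 1))   # T7.8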
Of course, modern cine cameras are virtually all reflex because of their need
to provide continuous line-of-sight, through-the-lens viewing to the camera
operator. What this means for the lens is that its rear element must be located
some distance in front of the film as predicated by the reflex mirror design of the
camera. Fortunately, by the 1950s the previously discussed transmission problem
had been remedied by the introduction of thin-film technology that ushered in
anti-reflection coatings. More complex lens configurations, containing anywhere
from ten to twenty elements, were now considered practical, and the fixed focal-
length lens (or prime) suddenly had a partner—the zoom lens. Both lens types
still had to deal with a large back focal-length distance, but this was now easily
managed because complex lens arrangements were feasible. Even those
troublesome wide-angle lenses, now sitting at a film distance mostly exceeding
their focal lengths, could be relatively easily constructed.
Even though the post-1950s cine lenses were substantially better than their
predecessors, they had one additional demand—faster speed, i.e., greater
aperture. Although film-stock sensitivity had gradually improved, low-light
filming situations had increased, thus requiring cine lenses of full aperture T1.3-
T1.4 and sometimes T1.0 or less. Fortunately, or perhaps with good timing due
to demand, glass technology started to improve substantially in the 1960s. The
first major effect on cine lenses was the realization that those fast-aperture lenses
were now possible due to high refractive index glasses. However, aberration
correction was still limited, especially at T1.3-T1.9 apertures. By the early
1980s, glass technology had improved so much that aberration correction, even
in lenses of T1.9 full aperture and, to a lesser extent, T1.3, was approaching the
maximum theoretical limit, even after allowing for all other lens design
constraints such as length, diameter, weight, cost, etc.
Figure 3. Basic “old” lens forms
Before getting into format and lens specifics, it should be mentioned that
detailed information about format image size, area, etc., can be found elsewhere
in this manual (see Cinematographic Systems chapter). Also, to discuss the
effect of specific formats on lenses, it is necessary to explain some elementary
theory about film formats and lenses. Referring to Figure 5a, it can be seen that
if the same focal-length lens, set at a constant aperture, is used in three widely
differing image format diagonals, then the fields of view are entirely different.
Now, let’s say the focal lengths of the lenses (still at a constant aperture) are
selected for constant fields of view as shown in Figure 5b. Then, upon projection
of each image (after processing to a print) on a constant-size viewing screen, it
would be apparent that the in-focus objects would look the same. However, for
out-of-focus objects it would be clearly apparent that the depths of field are quite
different. This result is extremely important to the cinematographer, not only
because of the artistic impact, but also because apart from changing the lens
focal length and hence field of view and perspective, nothing can be done to the
lens design to alter this result. The optical term “Lagrange invariant” (an
unalterable law of physics) has been defined (see Figure 1), and the
aforementioned result is a direct consequence of it. In Figure 5b, its controlling
effect on field of view (perspective), focal length and depth of field vs. image
format size is self-evident. Only one real option is available to the
cinematographer to alleviate this condition or even solve it—change the lens
aperture. This seems quite simple until the practicalities of actual shooting are
fully considered. How can you shoot in the 65mm format or, for that matter, the
16mm format and achieve the same look as for the 35mm format? Some
remedies (not cures) can be implemented, and they are best understood by taking
examples from old and new 65mm-format feature films. It should be understood
that because the 65mm format intrinsically has less depth of field than the 35mm
format for lenses of equivalent field of view, an abundant use of lighting
combined with stopping down the lens enables a similar image to be realized
(see Lawrence of Arabia, Dr. Zhivago and Ryan’s Daughter, all shot by Freddie
Young, BSC). Also, diffusion filters can help to increase the apparent depth of
field, albeit with some loss of image sharpness. Another option for helping the
65mm-format depth of field in certain scenes is the slant-focus lens (see the bar-
top bar scene in Far and Away, shot by Mikael Salomon, ASC). In comparison,
for the 16mm format the greater depth of field is more difficult to correct, since
doing so implies even faster lenses, which are not available because f/0.5 is the
theoretical minimum f-stop that no lens can go below. Therefore, the only real
solution for the 16mm format is to forego the preferred field of view and
corresponding perspective by changing the focal length of the lens or working at
lesser object distances. Using hard lighting is another approach that helps
somewhat with 16mm format depth of field, but the overall look may suffer.
Electronic enhancement in postproduction is another possibility, but again, the
overall look may suffer.
Figure 5. Formats and lenses.
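To make these relationships concrete, the short Python sketch below works through the coupling of format size, focal length, field of view and depth of field just described. It is purely illustrative and not part of this manual’s tables; the format aperture widths are nominal values assumed for the example.

    import math

    def horizontal_fov_deg(focal_mm, format_width_mm):
        # Horizontal angle of view for a given focal length and format width.
        return 2 * math.degrees(math.atan(format_width_mm / (2 * focal_mm)))

    # Nominal horizontal aperture widths in mm (assumed for illustration).
    FORMATS = {"16mm": 10.3, "35mm": 22.0, "65mm": 52.5}

    f35 = 50.0  # a 50mm lens on the 35mm format
    view = horizontal_fov_deg(f35, FORMATS["35mm"])
    for name, width in FORMATS.items():
        scale = width / FORMATS["35mm"]
        f_equiv = f35 * scale   # focal length giving the same field of view
        t_equiv = 2.8 * scale   # aperture needed to match 35mm depth of field at T2.8
        print(f"{name}: {f_equiv:.0f}mm at about T{t_equiv:.1f} gives the same "
              f"{view:.0f}-degree view with similar depth of field")

Run as written, the sketch shows why 65mm must be stopped down to roughly T6.7 to mimic a T2.8 look in 35mm, while 16mm would need about T1.3, at the edge of what is practically available, which is exactly the dilemma described above.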
To conclude, it is fair to say that for 35mm and 65mm film formats, just about
anything can be successfully shot as long as one is willing to accept the costs
involved for, say, lighting. For smaller formats, such as 16mm film or high-
definition video cameras (with 2⁄3-inch detectors), the main limitation is too
much depth of field in low-light-level situations where the lens aperture cannot
realistically and practically be less than T1.2–T1.5. Only faster lenses or a retreat
to larger formats, be they film or electronic, will completely solve the depth of
field issue. Of course, what is or is not deemed acceptable in terms of the depth
of field or look of the picture has so far been determined by what is expected. In
other words, it is highly influenced by past results. Future results, especially with
digital-video cameras and lenses, might look different, and over time might
become quite acceptable. So maybe the depth of field concerns will disappear. In
the meantime, the Lagrange invariant, just like Einstein’s theory of relativity,
cannot be broken, so lens depth of field, perspective and look are inextricably
linked to and governed by the format size.
Anamorphic vs. spherical depth of field will be covered later in this chapter.
Also, the deliberate omission of the circle of confusion in the preceding
discussion about depth of field is because it has no bearing on different film
formats that have similar resolution capabilities, especially when viewed
eventually on a cinema screen. The circle of confusion is a purely mathematical
value used to determine an estimate of expected or effective or apparent depth of
field, but that’s all, and it should only be used for that purpose.
SPHERICAL VS. ANAMORPHIC LENSES
Here we will discuss some of the differences between spherical and
anamorphic lenses. There will be no attempt to suggest or imply which of these
formats is better. Only lenses for the 35mm cine format will be described, but the
same observations apply to other formats. Unlike the Super 35 pseudo-
anamorphic format, both the true spherical (1.85:1) and anamorphic (2.40:1)
35mm formats require no anamorphic opticals (i.e., special printer lenses) in the
process of film negative to release print. Since the late 1950s, the anamorphic
film format has been about 59% greater in negative film area than the spherical
1.85:1 film format. An often-asked question is, what happened to the original
CinemaScope anamorphic lenses? Interestingly, the word “scope” has survived
to this day, even though the terms spherical (i.e., flat) and anamorphic (i.e.,
widescreen) are best suited to describe the format difference. There are many
reasons, mostly economic or business-related, as to why CinemaScope lenses
disappeared by the mid-1960s. Some aspects of the early lenses did have
technical deficiencies, and these are worth expanding upon.
Early anamorphic lenses produced one particularly disconcerting, focus-
related image characteristic, which caused several problems with both actors
(especially actresses) and the camera crew. The problem was “anamorphic
mumps,” a well-known term coined by movie production people. A good
example of this is to consider an actress (speaking her lines) walking from, say,
20 feet to 5 feet (i.e., full-body to facial shot) while the camera assistant or focus
puller does a follow focus to keep her in focus at all times. Assuming a high-
quality anamorphic prime lens was used (CinemaScope circa 1955-1965), the
face of the actress would naturally increase in size as she approaches the camera.
However, due to lens breathing through focus (explained in detail later), and
more specifically anamorphic lens breathing, not only did the face of the actress
increase in size, but it also became fatter at close focus. So the breathing effect,
or increase in size of the object, is much greater in the horizontal direction than
in the vertical direction. Obviously, actors were
unhappy about this phenomenon, so early anamorphic pictures had close-up
shots at 10 feet instead of 5 feet (to alleviate the effect). Production companies
and camera crews, particularly the cinematographer, did not like this, because
with the old, slow film stocks, a ton (for lack of a better word) of lighting was
required, and the perspective of the shot was not as it should be. Heat from the
vast lighting required also produced problems, like sweat on the actors’ faces and
makeup meltdown. In 1958, a U.S. patent was issued for an anamorphic lens
design that virtually eliminated this problem, and anamorphic lenses utilizing the
patented invention have been used continuously for more than forty years. They
are the predominant anamorphic lenses used to shoot the majority of widescreen
movies to this day. The importance of these new anamorphic lenses was
exemplified by the fact that Frank Sinatra, the star of Von Ryan’s Express (shot
by William H. Daniels, ASC), demanded that these lenses be
used. Before leaving this subject, an interesting piece of historical information:
The first prototype anamorphic prime lenses with reduced “anamorphic mumps”
were used in the 65mm-format film (2.75:1 with 1.25x anamorphic squeeze) Ben
Hur, released in 1959 by MGM and shot by Robert Surtees, ASC. A little-known
anecdotal fact about these anti-anamorphic mumps lenses is that they can be
specially designed to squeeze, or thin, an actor’s face in close-ups; indeed, this
was soon requested by a particular actress (who shall remain nameless) whose
face had fattened with age.
An often-asked question about anamorphic vs. spherical lenses is, “What is
the depth of field for each?” For a spherical lens, the answer is simple: look up
published depth of field tables (provided in this manual; see page XXX). For an
anamorphic lens, the answer is complicated; firstly, there usually are not any
published depth of field tables, and, secondly, the depth of field is different
depending on whether it is measured in the vertical or horizontal directions of
the film format. What does this mean in reality? Taking an example of, say, two
50mm focal-length lenses, one spherical and one anamorphic, both set at the
same aperture, two anamorphic scene compositions can be created, whereas only
one is available spherically. Rather than dissect all possible compositions of a
scene, a good rule of thumb is to allow for half the depth of field when
comparing anamorphic to spherical lenses of equal vertical or horizontal fields
of view and aperture. Although this covers a worst-case scenario of fitting an
anamorphic scene horizontally into a spherical-format width, it does provide a
good safety margin for all filming scenarios.
In terms of absolute or theoretical image quality and overall aberration
correction, there is no doubt that spherical lenses are capable of superior
performance over anamorphic lenses. However, in terms of practical image
quality or what will be eventually viewed on the screen or on smaller
presentation mediums (e.g., television), both easily provide adequate
performance. For theatrical presentation on a fixed-width cinema screen, the
anamorphic format will have a distinct advantage over the spherical format
because there is more depth of focus at the film print in the projector, which
means that constancy of film print position in the projector is less critical.
Another consideration relating to image quality or residual aberration correction
differences in spherical as opposed to anamorphic lenses is integration of visual
effects (optical and computer-generated). Part of the reason why the Super 35
format (spherical-lens origination, anamorphic release) has recently become
popular, even though the film negative is small—65% smaller than pure
anamorphic 2.40:1—is that the aforementioned low residual aberrations may
aid the effects community. Considering various aspects of spherical vs.
anamorphic-lens residual aberrations, the latter lens tends to produce more field
curvature, astigmatism and distortion. Because of this, the spherical lens has a
sweet spot (i.e., excellent image quality) with a diameter roughly equal to 90%
of the width of the format. In comparison, the anamorphic lens has an elliptical
sweet-spot area bounded vertically within 80% of the format and bounded
horizontally within 90% of the format (see Figure 6a). What this means
practically for the cinematographer is that objects of principal importance are
best kept within these sweet spots (at lens apertures approaching full aperture). It
should also be noted that all lenses, spherical and anamorphic, tend to perform
best beginning at an aperture stopped down by at least one from their maximum
aperture opening and up to, say, an aperture of T11 to T32 depending on lens
focal length (i.e., T11 for very short focal-length lenses, T32 for very long focal-
length lenses).
Figure 6. Spherical vs. anamorphic lens sweet spots.
The Cine Lens list, starting on page 653 in this manual, contains some of the
best-known specialty lenses and systems and identifies their specific
characteristics and possible applications. Some of them are dependent on folded
optical configurations utilizing mirrors or prisms. They are all unique, but some
have overlapping properties. None of them can be construed as front or rear lens
attachments, because they attach directly to the camera.
By far the most significant aspect of these lenses and optical systems is their
ability to achieve in-camera real-time shots not possible with regular primes and
zoom lenses. Other advantages include provision of large depth of field, extreme
close or even macro focusing, and maneuvering among objects (e.g., miniatures,
models, forced perspective).
Some good examples of their shot-making capability can be seen in the
following movie and TV sequences. In Titanic, cinematographer Russell
Carpenter, ASC and visual-effects supervisor Erik Nash used a
Panavision/Frazier lens system and camera, each under motion control, to shoot
the beginning of the last sequence in the movie. Shortly after the woman drops
the gemstone into the ocean from the back of the research ship, a dry-for-wet
scene commences with the camera system approaching a model of the sunken
Titanic hulk (in dark blue lighting), then traversing over the bow of the ship
heading toward the port side, then entering and traveling through a covered
outside walkway, and eventually slowing to a halt after turning left to see two
doors with unlit glass windows, which then are lit and open to reveal a
congregation of people gathered to toast the lead actor and actress. In
this shot, which is far too complicated to describe fully (although it is worth
noting that CGI and a Steadicam rig are also involved), the large depth of field,
image-rotation control and pointing capability of the Frazier lens system are
utilized up to the point where the doors open. Another movie, The Rock, shot by
John Schwartzman, ASC, exemplifies the variety of shots that can be
accomplished with the Frazier lens system and some other specialty lenses.
Many of the shots are seen toward the end of the movie, when Nicolas Cage is
being chased on the Alcatraz prison walkways while carrying the deadly green
marbles and accidentally dropping, then grabbing, them on the parapet of the
lighthouse tower. In this shot, the carry of focus from macro (a close-up of his
feet) to infinity (the distant San Francisco skyline), i.e., huge depth of field
with no follow focus, is clearly illustrated. For periscopic specialty lenses,
a good example can be seen in the introduction to public television’s
Masterpiece Theatre series, where the point of view given has the lens working
its way through table-top memorabilia in a Victorian-era drawing room,
eventually halting in front of a hardbound book with the title of the play about to
be presented on its front cover. This particular shot combines the maneuvering,
pointing, roll and close-focus capabilities of the Kenworthy snorkel.
The above examples relate to specialty lens systems where different objective
or taking lenses, primes and zooms, can be attached to an optical unit which
basically relays the light through mirrors and prisms to form a final image, and
wherein pointing and image-rotation means are housed. Many other lenses or
lens systems, some under remote control, such as the pitching lens, can be used
to achieve similar results.
Other specialty lenses, such as slant-focus lenses and bellows-type lenses,
typically have fewer features than those afforded by the aforementioned lens
systems. However, they do offer the cinematographer opportunities to capture
other distinctive views of a scene. For example, the slant-focus lens permits the
limitations of reduced depth of field to be overcome in low-light-level scenes
where objects span the field of view at continuously increasing or decreasing
focus distances, such as the typical car scene with the driver and passenger in
conversation and the camera looking into the car from either the passenger or
driver window. Bellows-type
lenses can produce shots similar to slant-focus lenses, but because of their
bellows dependency they cannot easily be adjusted in shot. However, the real
advantage of bellows lenses is their ability to produce a distorted field of view or
perspective. In other words, even though the film-format shape remains constant,
objects can be highly distorted and defocused differentially across the scene. For
example, the well-known THX rectangular-shaped credit could be made
trapezoidal in shape, with each corner defocused by different amounts.
All of these specialty lenses are currently very popular with cinematographers.
Their attributes are quite well-suited to movie making, and they are used to great
effect in TV commercials to get that difficult-to-obtain or different look.
LENS ATTACHMENTS
The most common cine lens attachments are placed before or after (and
occasionally within) a cine camera lens, be it a prime or zoom, spherical or
anamorphic.
Front lens attachments include diopter and split diopter close-focusing lenses,
image stabilizers, image shakers, distortion optics and inclining prisms. Their
main duty is to maximize the versatility of the lens to which they are attached.
Most of these front attachments lose some light transmission and, in some
instances, image quality, but their overall effect proves them to be worthwhile
even though they have deficiencies.
Rear lens attachments mainly include extenders (to increase or decrease the
overall lens focal length) and mesmerizers (rotatable anamorphic optics).
Extenders are well established in still photography. Their main property is to
increase (or even decrease) the overall focal length of the lens. There are three
disadvantages when using them: overall image quality may be somewhat
degraded; aperture is reduced at the rate of one stop per each 1.4x multiplier
(e.g., a 2x extender = 1.4 x 1.4, giving a two-stop loss); and the focus scale of
the lens will be incorrect by approximately the physical length of the extender. In
practical terms, the rule of thumb is that a good high-quality 1.4x rear extender
will not substantially affect the overall image quality, but a 2x or greater
extender will probably cause a significant reduction in image quality.
Technically, rear extenders should only be employed if no alternative prime or
zoom lens is available. Mesmerizers, due to their anamorphic nature, may
degrade image quality even further. In TV commercials this is not a problem, but
in moviemaking some caution should be exercised, such as stopping down the
aperture at least one stop.
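The stop-loss arithmetic for extenders follows directly from the multiplier: each 1.4x of added magnification costs one stop. A minimal sketch of the calculation (illustrative only, not a manufacturer’s specification):

    import math

    def extender_stop_loss(magnification):
        # Each 1.4x (i.e., sqrt(2)) of magnification costs one stop.
        return 2 * math.log2(magnification)

    for m in (1.4, 2.0):
        effective_t = 2.0 * m  # a T2 lens seen through the extender
        print(f"{m}x extender: {extender_stop_loss(m):.1f}-stop loss; "
              f"a T2 lens becomes roughly T{effective_t:.1f}")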
One front lens attachment, which is quite commonplace in the video-lens
marketplace but has only recently been introduced in cine lenses, is the single or
double aspherically surfaced element. The idea here is to take a single,
negatively powered lens element with one or two nonspherical surfaces and
attach it to the front of a wide-angle prime or zoom lens so that the focal length
is decreased to 80% (or even as far as 66%) of its original value;
correspondingly, the field of view is proportionally increased, while the image
quality is little changed. It should be pointed out that such attachments only work
properly with prime or zoom lenses that have a minimum full field of view of
about 75°. These front lens attachments offer an economical way to provide
increased wide-angle, short focal-length lenses not available in a lens series and,
by virtue of their aspherically surfaced lens design prescription, astigmatism,
distortion and lateral chromatic aberrations are kept under control. The only real
disadvantage of this wide-angle lens approach is the loss of focus capability,
because the front attachment requires the principal lens to be set at the end of its
close-focus range to reach the hyperfocal-focus distance of the overall system.
However, such wide-angle lenses’ large depth of field does not really warrant
any focusability. Examples of these front lens attachments currently being used
include a 6mm T2 prime covering the Super 16mm film format, which becomes
a 4.5mm prime (using a 0.75x front lens attachment), and a 14.5mm T1.9 prime
covering the Academy 35mm film format, which becomes an 11.5mm prime
(using a 0.8x front lens attachment). Due to the simplicity of these front lens
attachments, no significant stop loss is associated with their use.
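The arithmetic of such an attachment is simple scaling: the focal length is multiplied by the attachment factor, and the angle of view widens accordingly. A brief sketch (the Super 16 aperture width used is a nominal assumed value):

    import math

    def horizontal_fov_deg(focal_mm, format_width_mm):
        # Horizontal angle of view for a given focal length and format width.
        return 2 * math.degrees(math.atan(format_width_mm / (2 * focal_mm)))

    SUPER16_WIDTH = 12.5  # nominal horizontal aperture width in mm (assumed)

    for factor in (1.0, 0.75):
        f = 6.0 * factor  # the 6mm prime, bare and with a 0.75x attachment
        print(f"{f:.1f}mm on Super 16: about "
              f"{horizontal_fov_deg(f, SUPER16_WIDTH):.0f}-degree horizontal view")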
FORMAT SIZE AND LENS ATTRIBUTES
Earlier, in Formats and Lenses, lens depth of field was discussed. There are
some other important implications resulting from format size on lens size
(volume and weight), image quality and aperture, as well as reflex mirrored
camera considerations.
Perhaps the most noticeable effect is the tendency for lens size to increase
almost linearly with each format size, assuming a constant focal length and
constant full aperture. Of course, this must partly happen in reflex camera
systems because the reflex mirror’s size and position dictate the lens back-focal
length. Since the size and distance of the mirror from the film (or image plane)
increases mainly according to film-format size, this means that 65mm-format
lenses tend to be larger than 16mm-format lenses, at least at their rears.
However, the greater difficulty in covering a larger format, combined with the
need for larger depth of field, makes 65mm-format lenses slower, T2.8–T5.6 full
aperture, whereas 16mm-format lenses are faster, T1.4–T2.0. Unlike in still
photography, where a larger-format lens is chosen to provide a larger print, in
cinematography the 65mm, 35mm and occasionally 16mm prints are invariably
shown on similarly sized cinema screens, roughly 20–50 feet wide. In summary,
65mm-format cine lenses are optically less complex, slower and have slightly
less resolution per millimeter of film negative than 35mm- or 16mm-format
lenses. Nevertheless, 65mm lenses perform as well as they need to, based on the
size of the format. The key to the final presentation quality using large-format
lenses, including 65mm, VistaVision and 35mm anamorphic, is, as it always has
been, the larger format size and corresponding area of the negative.
SPECIAL APPLICATIONS OF LENSES
Special applications fall into the category of visual effects, which include
special effects, computer-graphics imaging (CGI), digital effects, animation, etc.
Most of these applications have one thing in common: they require principal
photography with lenses and film to achieve about 2,000 pixels of usable
information (and more recently up to 4,000 pixels) across the film format. Today,
the most-used and best cine lenses perform adequately and can meet this
requirement. Even anamorphic lenses, which have a poor reputation in this
regard, are not hard-pressed to meet it. In fact, until quite recently, film stocks, because of
their chemical image processing for enhanced contrast in the negative, have
proved to be more problematic than lenses, especially in blue or green matte
screen applications.
Although modern cine lenses may be considered precision optical instruments
that were designed, manufactured, tested and calibrated in a highly scientific
manner, many of their characteristics, features and specific properties have not
been made available publicly or accessible to people working in special
applications. In the last five to ten years, books or manuals containing some of
this information have appeared; however, further dissemination of lens
information is necessary.
To explain what some of this lens information is, a good method is to examine
a recently made movie with many special effects, such as Titanic. After camera
equipment was supplied to the production, the crew requested certain lens
information. In particular, variations of distortion and field of view (see
breathing earlier described) through focus and zoom and lens repeatability were
topics that kept arising. Eventually, this and other kinds of information were
released so that the desired special effects could be realized.
With the ever-increasing utilization of visual effects, a tremendous amount of
lens information will be not only required, but also indispensable in the making
of movies. Most of this information is not yet publicly available.
LENS DEPTH OF FIELD
Depth of field lens information is worth a special mention because until quite
recently, its method of calculation, especially at close- to medium-focus
distances (say 2–6 feet), has been somewhat inaccurate for wide-angle prime and
zoom lenses; and its interpretation for wide-angle lenses in general needs
clarification.
For modern, high-quality, wide-angle cine zoom lenses that are typically about
6–12 inches long, the lens length cannot be ignored when calculating the depth
of field. To properly calculate the depth of field, the first nodal point, or more
conveniently the entrance-pupil position, of a wide-angle lens must be known. In
most wide-angle zoom and prime lenses it is normally a few inches back from the front of the
lens. So assuming that properly calculated depth of field tables for the lens in
question are not available, a good rule of thumb for a short focal-length lens is to
subtract the physical lens length (from front to flange/mounting point) away
from the object distance (i.e., object to film), and then look up the depth-of-field
table. It goes without saying that a lens at a close focus distance of 4 feet (object
to film), after the subtraction of, say, one foot, will produce a significant
difference in the depth of field from that expected—just look at any
contemporary depth of field table to compare 4 feet and 3 feet at T2 for a lens of
any focal length.
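The rule of thumb can be checked with the standard thin-lens depth of field approximations; the sketch below (illustrative only, not this manual’s tables) compares 4-foot and 3-foot focus settings at T2 for an assumed 20mm lens, mimicking the subtraction of a one-foot lens length.

    FT_TO_MM = 304.8

    def depth_of_field_ft(focal_mm, t_stop, focus_ft, coc_in=0.001):
        # Standard thin-lens near/far limits for a given circle of confusion.
        c_mm = coc_in * 25.4
        s_mm = focus_ft * FT_TO_MM
        x = t_stop * c_mm * (s_mm - focal_mm)
        near = s_mm * focal_mm**2 / (focal_mm**2 + x)
        far = s_mm * focal_mm**2 / (focal_mm**2 - x) if focal_mm**2 > x else float("inf")
        return near / FT_TO_MM, far / FT_TO_MM

    # Compare a 4-foot mark with the same mark corrected by a one-foot lens length.
    for focus in (4.0, 3.0):
        near, far = depth_of_field_ft(20.0, 2.0, focus)
        print(f"20mm at T2, focused at {focus} ft: sharp from "
              f"{near:.2f} ft to {far:.2f} ft")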
One common complaint about the depth of field observed for wide-angle
primes or zooms: when the focus distance is 6 feet and the expected depth of
field (according to the tables) is 4 to 9 feet, why is the defocused object at 4 feet
almost as sharp as the focused object at 6 feet, while the defocused object at 9
feet looks comparatively very soft? The answer has nothing to do with the lens
performance, the validity of the depth-of-field tables, or even lens physical
length; it simply has to do with the magnification of objects at different focus
distances. Since the object at 4 feet is only 1½X larger than the object at 6 feet,
but 2¼X larger than the object at 9 feet, then for the same object (say, a person’s
face at all three focus distances, all lit in a similar manner), the aforementioned
result is to be expected.
In addition, it is worth noting that the introduction of even the weakest diffusion
filters, as well as soft lighting, will further compound and exaggerate this
condition.
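The magnification arithmetic above is simply the inverse-distance relationship, as this brief illustration shows:

    ref_ft = 4
    for d_ft in (6, 9):
        # Image magnification scales as 1/distance for the same object.
        print(f"an object at {ref_ft} ft appears {d_ft / ref_ft:.2f}x larger "
              f"than the same object at {d_ft} ft")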
LENS DESIGN AND MANUFACTURE
To the various suppliers of cine lens products, the design and manufacture
(including assembly, test and calibration) of cine lenses have been, and still are,
considered proprietary intellectual property. A survey of cine-lens patents
from the 1950s through today, few in number, confirms the secretive nature
attached to cine-lens products by various suppliers. Quite a few technical and
scientific publications on some of the above aspects of cine lenses are available
and are listed under References toward the end of this manual. Most of them
concentrate on the optical design of cine lenses.
Over the years, several different approaches have been used to provide
suitable cine lenses to the cinematographic marketplace. More often than not, the
best approach has been to design custom cine lenses from scratch. In this case,
two substantially different lens development approaches have been chosen. The
first, which for the sake of discussion will be called the old-world method,
entails a company performing virtually all tasks in-house. The second, called a
new-world method, has the design and development process controlled at one
company (with the final assembly, testing and calibration performed there), but a
large number of specialized suppliers being subcontracted for partial design and
component manufacture—sort of like the way modern jet aircraft are produced.
Both approaches have advantages, the first having everything totally controlled
under one roof, and the second being better suited to exploit new technologies.
In terms of which approach produces the best cine lenses, only the results can
provide an answer. However, the latter approach is best suited to providing faster
development of new lenses.
In such a limited space it is not realistic to discuss all the aforementioned
aspects of cine lenses. The references we have mentioned, although mainly
containing theoretical information, such as MTF performance, are very useful in
gaining a fairly comprehensive understanding of optical aberrations and
numerous other optical (e.g., veiling glare) and mechanical (e.g., use of linear
bearings) considerations in the design of cine prime and zoom lenses. We
suggest that the reader peruse the listed references for a more detailed account of
this subject.
LENS ERGONOMICS
Unlike most still-photography lenses, which are purely driven by cost vs.
image-quality requirements, cine lenses have a strong ergonomic component.
This component is indirectly dependent on what the cinematographer expects of
a cine lens, but is directly related to what his or her crew, specifically camera
assistants, must do to obtain the desired shot. Since the assistant’s job may
depend on ergonomic aspects of the chosen cine lens, this little-discussed lens
aspect is certainly worth noting.
It is often believed that the more quickly sharp focus can be attained by rotating
the focus gear, the better the lens. In fact, the opposite is true: it is best (and this
has been conclusively tested and will be verified by most camera assistants) for
the lens to have a lengthened focus scale with many spaced-apart focus
calibration marks (e.g., infinity, 60, 30, 20, 15, 12, 10, 9 feet, etc., continuing
down to close focus), so that although sharp focus may appear more difficult to
reach, the likely error in focus is reduced. Camera assistants say that this ergonomic lens
aspect of providing an extended, well-spaced and clear focus scale, usually on a
large lens barrel, is crucial to providing a well-focused, sharp image.
The aperture or T-stop scale is also important, because it should have clear
markings (e.g., T2, 2.8, 4, 5.6, etc.) spaced well apart and preferably linearly
spaced for equal stop differentials. The ergonomics of the latter condition have
become more important lately due to motion-control-powered iris practices,
including variable-aperture and variable-frame-rate shots.
Zoom scales were until recently largely ignored in terms of the focal-length
markings and spacing on the lens zoom scale. On modern zoom-lens scales, the
normal cramping of long focal-length marks has been avoided through
optimization of zoom cam data so that long focal-length zoom mark spacings
have been expanded and short focal-length zoom markings have been
compressed (but not to adversely affect selection of short focal-length settings).
Most modern prime and zoom cine lenses can be expected to offer expanded
focus, aperture and zoom scales on both sides of the lens, i.e., dual scales. Also,
all of these scales should not be obscured by matte boxes, motors or other
ancillary devices.
Some other ergonomic or perhaps more practical aspects of cine lenses are
worth identifying. In prime or zoom lenses, mechanical precision and
repeatability of focus marks are important, especially in 35mm-format lenses of
short focal length, say 30mm or less. Typically, in a high-quality, wide-angle
prime lens, where the entire lens is moved to focus, the repeatability error due to
mechanical backlash, slop, high spots, etc., should be less than a half-thousandth
of an inch (i.e., 12 microns metrically). Correspondingly, in wide-angle prime or
zoom lenses where internal or “floating” lens elements are utilized, a similar
tolerance, sometimes smaller, needs to be achieved. The eventual tolerance
required in this case may be larger or smaller than that mentioned, depending
upon the focal length (or power) of the movable focus group(s).
Maintenance of line of sight or “boresight” in primes, but more so for zooms,
is a major ergonomic consideration, especially in visual-effects and motion-
control applications. The best-quality zoom lenses today can be expected to
provide a maximum (worst-case) line-of-sight variation through zoom of less
than one inch at a 12-foot focus distance.
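Expressed as an angle, that specification is quite tight, as a one-line calculation shows (simple trigonometry, not a manufacturer’s figure):

    import math

    # One inch of line-of-sight wander at a 12-foot (144-inch) focus distance.
    deviation_deg = math.degrees(math.atan(1.0 / 144.0))
    print(f"maximum boresight wander: about {deviation_deg:.2f} degrees")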
In addition to what has been described, the camera assistant or any other lens
user demands that a cine lens not only provide accurate focus, aperture and zoom
control, but also the right feel. An analogous way to explain this is to consider
the turning of the frequency-control knob on a radio. Any movement of the knob
should be smooth, not rough or sticky, and yet require some force. The same
applies to the feel of the focus, aperture, zoom or other gears on a cine lens. For
the best cine lenses, this feel is taken for granted, but for many others an
uncertainty and lack of confidence about the lens is built up in the camera
assistant’s mind, and this is not conducive to good filmmaking.
FUTURE LENSES
From conception to final use, all modern cine lenses are heavily dependent on
computers. The complexity of modern cine lenses, whether optical, mechanical,
electronic, some combination thereof, or otherwise, is so great that computers
are not only needed, they are fundamental to producing high-quality cine lenses.
The advancement and gradual inroad made by zoom lenses in many fields,
including cinematography, is a testament to the importance of the computer in
the whole zoom-lens design process, from initial idea to final application.
Without computers, most modern zoom lenses would not exist.
A full series of prime lenses will still continue to be available for some time,
but increasingly, the prime lens will need to become more complex, with
features such as continuous focusing from infinity to close or even macro object
distances; internal, optically generated filtration effects; and so on, so that their
design and manufacturing costs, i.e., return on investment, remain economically
attractive. Individual prime lenses do offer one valuable advantage over zooms:
they fill niche or special lens requirements better, e.g., very wide-angle and
fisheye fields of view, very long focal lengths, slant focus, bellows, etc.
Therefore, in the future, cine lens complements will still include primes as well
as zooms and specialty lenses, but the mix will undoubtedly change.
Technological advancements in raw glasses, aspherical surfaces, thin films
(i.e., coatings), diffractive optics and other aspects of lenses will continue to fuel
lens development. A recently introduced compact, wide-angle, macro-focus
(continuously focusable from infinity down to almost the front lens element),
constant-aperture (through focus and zoom) cine zoom lens indicates what is to
come. This lens, which employs two aspherical surfaces, some of the most
exotic glasses ever produced, five cams, two movable focus groups, two
movable zoom groups, a movable iris and a total of twenty-three lens elements,
is a good example of what is already possible (See Figure 7). Given the high
performance level achievable by such a zoom, whether it be image quality,
breathing control, macro focusing or many other features, it is easy to understand
why zoom lenses have gained and will gain ground over traditional primes.
A good footnote is to consider where lenses have come from and where they
are likely to go in comparison to other technologies. Lenses were here before
film and will be here after film. Lenses were here before silicon-based electronic
computer processors and will be here after they are long gone! When optically
based, electronic-like computer processors arrive in the not too distant future,
lenses will still be around, and their development will still be thriving.
LENS DATA TABLES
The lens data tables provided in this manual have been compiled for many
35mm, 16mm, VistaVision and 8mm spherical cine lenses purely as a guide to
what might be expected in terms of lens depth of field, hyperfocal distance and
other characteristics. It is extremely important to note that only a real film test
will determine the actual characteristics of any lens.
For lens depth of field, a conservative circle of confusion of .0010 inches
(1⁄1000) has been consistently used throughout the lens data tables. However, with
modern cine equipment, lesser circles of confusion may be necessary. There are
two reasons for this. Firstly, most modern spherical cine lenses produce sharp
images mainly due to their high-contrast capabilities, and, secondly, most
modern film negative stocks exhibit similar high-contrast tendencies. So, for
such a lens and such a film stock, with hard lighting and no filtration, the
effective or apparent depth of field falls within a smaller range about the chosen
focus distance than shown in the lens data tables. Although this lens depth of
field scenario may be considered extreme, it seems to arise more and more often
in modern filmmaking, and as most cinematographers and their film crews will
attest, it is usually better to play it safe when dealing with lens depth of field.
Therefore, it is recommended that a reduced depth of field, up to half of what is
shown in the lens data tables, be adopted when extreme conditions arise.
In order to accommodate these needs (as indicated in the tables), the tables
enable you to read depth of field for a circle of confusion of .0005 (5⁄10,000),
half the size of, and thus twice as critical as, the value listed under a given
f-stop. Once you match your focusing distance to the chosen f-stop, merely move
two columns to the left in order to see the depth of field value for a 5⁄10,000
circle of confusion.
The hyperfocal distances provided in the lens depth of field tables in this
manual indicate the focus distance to which a lens must be set so that the
(defocused) image quality performance is nearly equal for objects placed at
infinity and half the hyperfocal distance.
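Using the standard hyperfocal approximation H = f²/(N x c) + f, the sketch below (illustrative values, not drawn from the tables) also shows why halving the circle of confusion, as recommended above, roughly doubles the hyperfocal distance.

    def hyperfocal_ft(focal_mm, t_stop, coc_in):
        # Standard approximation: H = f^2 / (N * c) + f, converted to feet.
        c_mm = coc_in * 25.4
        return (focal_mm**2 / (t_stop * c_mm) + focal_mm) / 304.8

    for coc in (0.001, 0.0005):
        h = hyperfocal_ft(50.0, 2.8, coc)
        print(f"50mm at T2.8, CoC {coc} in: hyperfocal about {h:.0f} ft; "
              f"focused there, sharpness runs from about {h / 2:.0f} ft to infinity")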
To conclude, it must be emphasized that no matter how rigorous the
mathematical calculations of lens properties or characteristics (including depth
of field) are, there will always be nonlens contributions, such as filters, film
stock, lighting, etc., and these must be considered in the final outcome. Only the
results of an actual film test can determine the depth of field that is acceptable to
the eye of the beholder.
Iain A. Neil is an Optical Consultant based out of Switzerland. His company
ScotOptix contracts globally with optical technology companies, providing
technical, business and intellectual property expertise with specialization in
zoom lenses, multi-configuration optical systems and new technology
implementation. He has more than 100 worldwide optically related patents
issued and applied for, has published and edited more than thirty papers and
books and has garnered eleven Academy Awards, two Emmys and the Fuji Gold
Medal. He has been active in the optics industry for over thirty-six years and is
currently a fellow member of SPIE and SMPTE, an associate member of ASC, a
member of OSA and a voting member of A.M.P.A.S.
1. Or, in the future, digital image detector, such as a charge-coupled device (CCD).
2. In the future, magnetic tape or magnetic/optical disk containing digital data.
3. Catadioptric still-photography lenses that utilize mirror surfaces (flat, spherical or aspherical), in addition
to refractive surfaces, offer the intrinsic advantage of folding the optical path back on itself twice, making
the lens compact in length, which is particularly attractive in narrow-angle, long focal-length lenses that
otherwise would be large and cumbersome (see Figure 2). Wide-angle, short focal-length catadioptric lenses
are uncommon because of the presence of large field-dependent aberrations that are difficult to correct in a
mirrored lens system.
4. The use of calcium fluoride is best avoided in telephoto cine lenses because it is highly sensitive to
temperature changes of even a few degrees Fahrenheit (or Celsius) and can be expected to produce
significant defocusing that may be troublesome in obtaining and maintaining sharp focus of objects over a
short period of time (1–5 minutes).
5. In some scenes, the streaking was introduced by visual effects using 65mm format lenses.
Camera Filters
Camera filters are transparent or translucent optical elements that alter the
properties of light entering the camera lens for the purpose of improving
the image being recorded. Filters can affect contrast, sharpness, highlight flare,
color and light intensity, either individually or in various combinations. They can
also create a variety of special effects. It is important to recognize that even
though there are many possibly confusing variations and applications, all filters
behave in a reasonably predictable way when their properties are understood and
experienced. Most of these properties relate similarly to filter use in both film
and digital imaging. The following will explain the basic optical characteristics
of camera filters, as well as their applications. It is a foundation upon which to
build by experience. Textual data cannot fully inform; there is always something
new to discover for yourself.
In their most successful applications, filter effects blend in with the rest of the
image to help get the message across. Use caution when using a filter in a way
that draws attention to itself as an effect. Combined with all the other elements
of image making, filters help make visual statements, manipulate emotions and
thought, and make believable what otherwise would not be. When used
creatively, with an understanding of their abilities, they can really get the viewer
involved.
CHANGING TECHNOLOGY
More than ever, changes in technology have fostered new applications,
considerations and formats for filters. Digital technology requires a new array of
spectrum and sensitivity concerns as well as opening up new imaging
opportunities, especially post-capture. Look for references to such changes
throughout this section.
FILTER PLANNING
Filter effects can become a key part of the look of a production, if considered
in the planning stages. They can also provide a crucial last-minute fix to
unexpected problems, if you have them readily available. Where possible, it is
best to run advance tests for preconceived situations when time allows.
SIZES, SHAPES AND ON-LENS MOUNTING TECHNIQUES
Lens-mounted filters are available in round and rectangular shapes and in
many sizes. Round filters generally come supplied with metal rings that mount
directly to the lens. Frugal filter users might find it preferable to use adapters
allowing the use of a set of filters of a single size with any lenses of equal or
smaller sizes. Round filters can also be supplied with self-rotating mounts where
needed, as for polarizers or Star effects. They can be readily stacked in
combination. Rectangular filters require the use of a special filter holder or matte
box. They offer the additional benefit of allowing slidability for effects that must
be precisely aligned within an image, such as graduated filters. In all cases, it is
advisable to use a mounting system that allows for sturdy support and ready
manipulation. In addition, the use of a lens shade at the outermost mounting
position (from the lens) will minimize the effect of stray off-axis reflections.
FILTER FACTORS
Many filter types absorb light that must be compensated for when calculating
exposure. These are supplied with either a recommended filter factor or a stop
value. Filter factors are multiples of the unfiltered exposure. Stop values are
added to the stop to be set without the filter. Multiple filters will add stop values.
Since each full stop added is a doubling of the exposure, a filter factor of 2 is
equal to a one-stop increase. Example: Three filters of one stop each will need
three additional stops, or a filter factor of 2 x 2 x 2 = 8 times the unfiltered
exposure.
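The same arithmetic in a short Python sketch (illustrative only): factors multiply, stop values add, and a factor equals 2 raised to the number of stops.

    import math

    def factor_to_stops(factor):
        # A filter factor of 2 equals one stop of added exposure.
        return math.log2(factor)

    factors = [2, 2, 2]            # three one-stop filters
    combined = math.prod(factors)  # filter factors multiply
    print(f"combined factor {combined} = {factor_to_stops(combined):.0f} stops")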
When in doubt in the field about compensation needed for a filter that you
have no information on, you might use your light meter with the incident bulb
removed. If you have a flat diffuser, use it; otherwise just leave the sensor bare.
Aim it at an unchanging light source of sufficient intensity; placing the meter on
the ground, facing up at a blank sky, can be a good field setup. Make a reading without the filter.
Watch out for your own shadow. Make a reading with the filter covering the
entire sensor. No light should enter from the sides. The difference in the readings
is the compensation needed for that filter. You could also use a spot meter,
reading the same bright patch, with similar results. There are some exceptions to
this depending on the filter color, the meter sensitivity, and the target color, but
this method is often better than taking a guess.
Published filter-factor information should be taken as a starting point.
Differing circumstances may call for deviations from the norm.
FILTER GRADES AND NOMENCLATURE
Many filter types are available in a range of grades of differing strengths. This
allows the extent of the effect to be tailored to suit various situations. The grade
numbering range can vary with the effect type; generally, the higher the number,
the stronger the effect. Unless otherwise stated, there is no mathematical
relationship between the numbers and the strengths. A grade 4 is not twice the
strength of a grade 2. A grade 1 plus a grade 4 doesn’t add up to a grade 5.
Another possible source of confusion is that the various filter manufacturers
often offer filters with similar names, using terms such as “fog” and “diffusion,”
which may have different characteristics.
The oldest standard for naming filter colors was developed early in the 20th
century by Frederick Wratten and his associate, C.E. Kenneth Mees. While at
Kodak they sought to make early film capabilities more closely respond to
customer requirements. In doing so, they created specifications for a series of
filter colors that correspond to particular applications. They gave each color a
number, which we have since called Wratten numbers. These will be referenced
later in this article. Kodak makes gel filters that are the defining standard for the
Wratten system. Other manufacturers may or may not reference the Wratten
designations alongside their own numbering systems.
Contact the various manufacturers for additional details about their filter
products and nomenclature.
CAMERA FILTERS FOR BOTH COLOR AND BLACK-AND-
WHITE
Ultraviolet Filters
Film often exhibits a greater sensitivity to what is invisible to us: ultraviolet
light. This is most often noticeable outdoors, especially at high altitudes, where
the UV-absorbing atmosphere is thinner, and over long distances, such as in
marine scenes. It can show up as a bluish color cast with color film, or it can cause a low-
contrast haze that diminishes details, especially when viewing far-away objects
in either color or black-and-white. Ultraviolet filters absorb UV light generally
without affecting light in the visible region.
It is important to distinguish between UV-generated haze and that of airborne
particles such as smog. The latter is made up of opaque matter that absorbs
visible light as well as UV light, and will not be appreciably removed by a UV
filter.
Since the amount of UV encountered changes often, some cinematographers
find that they can obtain more consistent color outdoors through the regular use
of a filter that totally removes UV, along with their other filters. Some
manufacturers will offer combination UV absorbers with other effects on special
order for this purpose.
Ultraviolet filters come in a variety of absorption levels, usually measured by
their percent transmission at 400 nanometers (nm), the visible-UV wavelength
boundary. Use a filter that transmits zero percent at 400nm for aerial and far-
distant scenes; one that transmits in the 10%–30% range at 400nm is fine for
average situations.
Figure 1a. NO FILTER: Digital cameras can be overly
sensitive to light in the far red and infrared. Certain objects
have the property of reflecting a disproportionate amount of
light in this spectral region, which can result in dark objects
like the four black fabric samples in the center and at the left
of the chart appearing with a distinct reddish tint while the
two samples on the right remain neutral.
Digital imaging sensors typically are less sensitive than film to UV light and
have less need of UV-attenuating filters.
Neutral-Density Filters
When it is desirable to maintain a particular lens opening for sharpness or
depth-of-field purposes, or simply to obtain proper exposure when confronted
with too much light intensity, use a neutral-density (ND) filter. This will absorb
light evenly throughout the visible spectrum, effectively altering exposure
without requiring a change in lens opening and without introducing a color shift.
Neutral-density filters are denoted by (optical) density value. Density is
defined as the log, to base 10, of the opacitance. Opacitance (degree of
absorption) of a filter is the reciprocal of (and inversely proportional to) its
transmittance. As an example, a filter with a compensation of one stop has a
transmittance of 50%, or 0.5 times the original light intensity. The reciprocal of
the transmittance, 0.5, is 2. The log, base 10, of 2 is approximately 0.3, which is
the nominal density value. The benefit of using density values is that they can be
added when combined. Thus, two ND 0.3 filters have a density value of 0.6.
However, their combined transmittance would be found by multiplying 0.5 x 0.5
= 0.25, or 25% of the original light intensity.
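These conversions are easy to verify; the sketch below (illustrative only) turns density into transmittance and stops using the definitions just given.

    import math

    def transmittance(density):
        # Transmittance is the reciprocal of opacitance; density = log10(1/T).
        return 10 ** -density

    def stops(density):
        # One stop of absorption corresponds to a density of about 0.3.
        return density / math.log10(2)

    for d in (0.3, 0.6, 0.9):
        print(f"ND {d}: {transmittance(d) * 100:.1f}% transmission, "
              f"about {stops(d):.1f} stops")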
Neutral-density filters are also available in combination with other filters.
Since it is preferable to minimize the number of filters used (see section on
multiple filters), common combinations such as a Wratten #85 (daylight-
conversion filter for tungsten film) with an ND filter are available as one filter,
as in the 85N6. In this case, the two-stop ND 0.6 value is in addition to the
exposure compensation needed for the base 85 filter.
There are two types of neutral-density filters in general use. The most
prevalent type uses organic dyes to attenuate light. For situations where it is
necessary to obtain the most even control from near-ultraviolet through the
visible spectrum into the near-infrared, a metallic vacuum-deposition coating,
often a nickel alloy, is ideal. However, the silvered-mirror appearance of these
filters imparts internal reflections that need to be addressed
Special metallic coatings can also be employed for recording extreme bright-
light situations, such as the sun during an eclipse. These filters are very dense
and absorb substantially through the IR and UV range, as well as the visible, to
reduce the potentially blinding level of light. The best have a density value of
about 5.7, which allows less than 0.001% of the overall light through. Caution:
Do not use any filter to aim at the sun unless it is clearly labeled as having been
made for that purpose. Follow the manufacturer’s directions and take all possible
precautions to avoid accidental (and potentially permanent) blindness.
Ultimately, what you need to know is that infrared-neutral-density (IR-ND)
filters, made using various combinations of these techniques (the dichroic
hot-mirror coatings and the far-red-absorbing dyes), are offered by various filter
manufacturers, and each works with a particular range of applicable cameras. A
key concern to be aware of is that they may or may not be stackable in
combination, depending on the specific filter types and application.
The matching filter specifications can change with each new camera
introduction. Keep abreast of developments in this area with your suppliers of
cameras and filters.
A relatively new entry offering a unique approach in this field is the Tessive
Time Filter. Essentially a controllable liquid crystal shutter panel that fits in a
standard matte box, its primary use is to eliminate certain undesirable motion
artifacts. Since it also acts as an electronically variable ND filter that attenuates
IR, it can function as a variable IR-ND filter.
Figure 2b. GRADUATED NEUTRAL DENSITY 0.6
FILTER: Taken at the same midday time as the unfiltered
image, the filter absorbs two stops from the sky, allowing it
to appear correctly while the exposure is adjusted to
properly render the foreground through the clear half of the
filter. The soft transition between the clear and the ND
halves of the filter allow the effect to blend well together and
make for a more balanced image.
Graduated ND Filters
Often it is necessary or desirable to balance light intensity in one part of a
scene with another, namely in situations where you don’t have total light control,
as in bright exteriors. Exposing for the foreground will produce a washed-out,
overexposed sky. Exposing for the sky will leave the foreground dark and
underexposed.
Graduated ND filters are part clear, part neutral-density, with a smoothly
graded transition between. This allows the transition to be blended into the
scene, often imperceptibly. An ND .6-to-clear, with a two-stop differential, will
sometimes compensate for the average bright-sky-to-foreground situation.
These filters are also available in combination colors, where the entire filter is,
for example, a Wratten #85, while one half also combines a graded-transition
neutral-density, as in the #85-to-85N6. This allows one filter to fulfill the need
for two.
Graduated filters generally come in three transition types. The most
commonly used is the soft-edge graduation. It has a wide enough transition area
on the filter to blend smoothly into most scenes, even with a wide-angle lens
(which tends to narrow the transition within the image). A long focal length,
however, might only image in the center of the transition. In this case, or where
the blend must take place in a narrow, straight area, use a hard edge. This is ideal
for featureless marine horizons. For situations where an extremely gradual blend
is required, an attenuator is used. It changes density almost throughout its length.
The key to getting best results with a graduated filter is to help the effect blend
in as naturally as possible. Keep it close to the lens to maximize transition
softness. Avoid having objects in the image that extend across the transition in a
way that would highlight the existence of the filter. Don’t move the camera
unless the transition can be maintained in proper alignment with the image
throughout the move. Make all positioning judgments through a reflex
viewfinder at the actual shooting aperture, because the apparent width of the
graduation is affected by a change in aperture.
Graduated filters are best used in a square or rectangular format in a rotating,
slidable position in a matte box. This will allow proper location of the transition
within the image. They can be used in tandem; for example, one affecting the
upper half, the second affecting the lower half of the image. The center area can
also be allowed to overlap, creating a stripe of the combination of effects in the
middle, most effectively with graduated color filters (see section on Graduated
Color Filters).
Polarizing Filters
Polarizers allow color and contrast enhancement, as well as reflection control,
using optical principles different from any other filter types. Most light that we
record is reflected light that takes on its color and intensity from the objects we
are looking at. White light, as from the sun, reflecting off a blue object appears
blue because all other colors are absorbed by that object. A small portion of the
reflected light bounces off the object without being absorbed and colored,
retaining the original (often white) color of its source. With sufficient light
intensity, such as outdoor sunlight, this reflected glare has the effect of washing
out the color saturation of the object. It happens that for many surfaces, the
reflected glare we don’t want is polarized, while the colored reflection we want
isn’t.
The waveform description of light defines nonpolarized light as vibrating in a
full 360-degree range of directions around its travel path. Polarized light in its
linear form is defined as vibrating in only one such direction. A (linear)
polarizing filter passes light through in only one vibratory direction. It is
generally used in a rotating mount to allow for alignment as needed. In our
example above, if it is aligned perpendicular to the plane of vibration of the
polarized reflected glare, the glare will be absorbed. The rest of the light, the
true-colored reflection vibrating in all directions, will pass through no matter
how the polarizing filter is turned. The result is that colors will be more strongly
saturated, or darker. This effect varies as you rotate the polarizer through a
quarter-turn, producing the complete variation of effect from full to none.
Polarizers are most useful for increasing general outdoor color saturation and
contrast. Polarizers can darken a blue sky, a key application, on color as well as
on black-and-white film, but there are several factors to remember when doing
this. To deepen a blue sky, the sky must be blue to start with, not white or hazy.
Polarization is also angle-dependent. A blue sky will not be equally affected in
all directions. The areas of deepest blue are determined by the following rule of
thumb: when setting up an exterior shot, make a right angle between thumb and
forefinger; point your forefinger at the sun. The area of deepest blue will be the
band outlined by your thumb as it rotates around the pointing axis of your
forefinger, directing the thumb from horizon to horizon. Generally, as you aim
your camera either into or away from the sun, the effect will gradually diminish.
There is no effect directly at or away from the sun. Do not pan with a polarizer
without checking to see that the change in camera angle doesn’t create
undesirable, noticeable changes in color or saturation. Also, with an extra-wide-
angle view, the area of deepest blue may appear as a distinctly darker band in the
sky. Both situations are best avoided. In all cases, the effect of the polarizer will
be visible when viewing through it.
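The hand rule corresponds to the physics of Rayleigh scattering: skylight is most strongly polarized at 90 degrees from the sun and not at all directly toward or away from it. The sketch below uses the idealized single-scattering formula (real skies polarize less because of haze and multiple scattering, so treat the percentages as an upper bound):

```python
import math

def sky_polarization(scatter_angle_deg: float) -> float:
    """Degree of polarization for idealized single Rayleigh scattering."""
    c = math.cos(math.radians(scatter_angle_deg))
    return (1 - c * c) / (1 + c * c)

for a in (0, 45, 90, 135, 180):
    print(f"{a:3d} deg from the sun: {sky_polarization(a):.0%} polarized")
# Maximum at 90 degrees from the sun (the band swept by your thumb);
# no effect looking directly toward or away from the sun.
```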
Polarizers need approximately 1½ to 2 stops exposure compensation,
generally without regard to rotational orientation or subject matter. They are also
available in combination with certain standard conversion filters, such as the
85BPOL. In this case, add the polarizer’s compensation to that of the second
filter.
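When stacking compensations this way, the stop values add and the corresponding filter factors multiply. A hedged sketch, assuming the commonly published ⅔-stop compensation for a #85B together with the 1½–2 stop range quoted above for a polarizer:

```python
def combined(stops):
    """Total stop loss for stacked filters; the factor is 2**total."""
    total = sum(stops)
    return total, 2 ** total

lo_stops, lo_factor = combined([2/3, 1.5])   # 85B + polarizer, best case
hi_stops, hi_factor = combined([2/3, 2.0])   # 85B + polarizer, worst case
print(f"85BPOL: {lo_stops:.1f}-{hi_stops:.1f} stops "
      f"(filter factor {lo_factor:.1f}x-{hi_factor:.1f}x)")
```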
Certain camera optical systems employ internal surfaces that also polarize
light. One example is the use of a videotap. Using a standard (linear) polarizer
may cause the light to be further absorbed by the internal optics, depending on
the relative orientation. This may interfere with the normal operation of the
equipment. The solution in these instances is to use a circular polarizer. The term
“circular” does not refer to its shape. Rather, it is a linear polarizer to which has
been added, on the side facing the camera, a clear (you can’t see it, you can only
see what it does) quarter wave retarder. This corkscrews the plane of
polarization, effectively depolarizing the light (after it has been through the
linear polarizer, which will have already had its effect on enhancing your image),
eliminating the problem. It is of critical importance, then, when using a circular
polarizer that it be oriented in the proper direction. That is, the retarder layer
must be on the side facing the camera. The filter must be either clearly labeled
by the manufacturer as to the correct direction, or mounted in a ring that only
threads on one way. Be careful when using a filter that mounts on the rear of a
lens, as in some wide-angle designs. Some lenses will allow the threaded filter
ring to mount in the wrong orientation. You can ensure the correct direction by
seeing that the direction the camera views through the filter is the one where the
filter functions as a normal polarizer. If turned the other way, it will not produce
the polarization effect. The circular polarizer otherwise functions in the same
manner as a standard linear one.
Polarizers can also control unwanted reflections from surfaces such as glass
and water. For best results, be at an angle of 33 degrees incident to the reflecting
surface. Viewing through the polarizer while rotating it will show the effect. It
may not always be advisable to remove all reflections. Leaving some minimal
reflection will preserve a sense of context to a close-up image through the
reflecting surface. For example, a close-up of a frog in water may appear as a
frog out of water without some telltale reflections.
For certain situations, it may be desirable to use a pair of polarizers to create a
variable-density filter. Although the transmission will vary as you rotate one
relative to the other, you should be aware of the resultant light loss, even at its
lightest point.
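The optics behind this pairing is Malus's law: the second polarizer transmits cos²θ of the already-polarized light, where θ is the angle between the two polarizing axes. The sketch below assumes an idealized base loss of about 1.7 stops for the pair at its lightest (parallel) setting; real polarizers vary, so test before relying on these numbers:

```python
import math

BASE_LOSS_STOPS = 1.7   # assumed loss of the pair with axes parallel

def total_loss(theta_deg: float) -> float:
    """Stops of loss for a rotation theta between the polarizer axes."""
    transmission = math.cos(math.radians(theta_deg)) ** 2   # Malus's law
    return BASE_LOSS_STOPS - math.log2(transmission)

for angle in (0, 30, 45, 60, 80):
    print(f"{angle:2d} deg: {total_loss(angle):.1f} stops")
# 0 deg ~1.7 stops (the light loss "even at its lightest point"),
# 45 deg adds one stop, and the loss climbs steeply toward 90 deg.
```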
For relatively close imaging of documents, pictures and small three-
dimensional objects in a lighting-controlled environment, as on a copy stand,
plastic polarizers mounted on lights aimed at 45 degrees to the subject from both
sides of the camera will maximize the glare-reducing efficiency of a polarizer on
the camera lens. The camera in this case is aimed straight at the subject surface,
not at an angle. The lighting polarizers should both be in the same perpendicular
orientation to the one on the lens. Again, you can judge the effect through the
polarizer.
Diffusion Filters
Many different techniques have been developed to diffuse image-forming
light. Strong diffusion can blur reality for a dream-like effect. In more subtle
forms, diffusion can soften wrinkles to remove years from a face. The optical
effects all involve bending a percentage of the image-forming light from its
original path to defocus it.
Some of the earliest portrait diffusion filters still in use today are nets. Fine
mesh, like a stocking, stretched across the lens has made many a face appear
youthful, flawless. This effect can now be obtained through standard-sized
optical glass filters, with the mesh laminated within. These function through
“selective” diffusion. They have a greater effect on small details, such as
wrinkles and skin blemishes, than on the rest of the image. The clear spaces in
the mesh transmit light unchanged, preserving the overall sharp appearance of
the image. Light striking the flat surface of the net lines, however, is reflected or
absorbed. A light-colored mesh will reflect enough light either to tint shadows lighter, which lowers contrast, or to tint them its own color, while leaving highlight areas alone. The effect
of diffusion, however, is produced by the diffraction of light that just strikes the
edges of the mesh lines. This is bent at a different angle, changing its distance to
the film plane, putting it out of focus. It happens that this has a proportionately
greater effect on finer details than on larger image elements. The result is that
fewer wrinkles or blemishes are visible on a face that otherwise retains an
overall, relatively sharp appearance.
The finer the mesh, the more the image area covered by mesh lines and the
greater the effect. Sometimes, multiple filters are used to produce even stronger
results.
As with any filter that has a discrete pattern, be sure that depth of field doesn’t
cause the net pattern to become visible in the image. Using small apertures or
short focal-length lenses makes this more likely, as will using a smaller film
format such as 16mm vs. 35mm, given an equal field of view. Generally,
midrange or larger apertures are suitable, but test before critical situations. When
in need of net diffusion in circumstances where mounting it in front of the lens
will cause the pattern to show, try mounting the filter in a suitable location
behind the lens (if the equipment design allows). This should reduce the chance
of the pattern appearing. Placing a glass filter behind the lens may alter the back-
focal length, which may need readjustment. Check with your lens technician. A
test is recommended.
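The amount of that shift follows from simple optics: a plane-parallel plate of thickness t and refractive index n placed in the converging beam moves the focal plane back by t(1 − 1/n), roughly a third of the filter's thickness for typical glass. A minimal sketch, with the thickness and index values assumed for illustration:

```python
def focus_shift_mm(thickness_mm: float, refractive_index: float = 1.5) -> float:
    """Focus shift from a plane-parallel glass plate behind the lens:
    t * (1 - 1/n), about one third of the thickness for n ~ 1.5."""
    return thickness_mm * (1 - 1 / refractive_index)

print(f"{focus_shift_mm(4.0):.2f} mm")   # a 4mm filter shifts focus ~1.33mm
```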
When diffusing to improve an actor’s facial appearance, it is important not to
draw attention to the presence of the filter, especially with stronger grades, when
diffusion is not required elsewhere. It may be desirable to lightly diffuse adjacent
scenes or subjects not otherwise needing it to ensure that the stronger filtration,
where needed, is not made obvious.
In diffusing faces, it is especially important that the eyes do not get overly soft
and dull. This is the theory behind what might be called circular diffusion filters.
A series of concentric circles, sometimes also having additional radial lines, are
etched or cast into the surface of a clear filter. These patterns have the effect of
selectively bending light in a somewhat more efficient way than nets, but in a
more radial orientation. This requires that the center of the circular pattern is
aligned with one of the subject’s eyes—not always an easy or possible task—to
keep it sharp. The rest of the image will exhibit the diffusion effect.
A variation on the clear-center concept is the center-spot filter. This is a
special-application filter that has a moderate degree of diffusion surrounding a
clear central area that is generally larger than that of the circular diffusion filter
mentioned previously. Use it to help isolate the main subject, held sharp in the
clear center, while diffusing a distracting background, especially in situations
where a long lens and depth-of-field differentiation aren’t possible.
Figure 3a. NO FILTER, STANDARD EXPOSURE: Midday scene as it appears without a filter.
Figure 3b. BLACK PRO-MIST 1, STANDARD EXPOSURE: Midday scene with highlight haze more visually suggestive of the sun's heat and the humidity by the lake.
Figure 3c. SUNRISE 3 GRAD PLUS BLACK PRO-MIST 1, one stop under: Combining the hazy atmosphere of the Black Pro-Mist with the color of the Sunrise Grad, underexposure produces a visual sense of early morning.
Figure 3d. TWILIGHT 3 GRAD, two stops under: The cool colors of the Twilight Grad plus a two-stop underexposure produce a visual sense of early evening.
Another portrait diffusion type, long in use, involves the placement of small
lenslets or clear refracting shapes dispersed on an otherwise clear optical surface.
They can be round, diamond-shaped or otherwise configured. These are capable
of more efficient selective diffusion than the net type and have no requirement to
be aligned with the subject’s eye. They don’t lower contrast by tinting shadows,
as light-colored nets do. These lenslets refract light throughout their surface, not
just at the edges. For any given amount of clear space through the filter, which is
relative to overall sharpness, they can hide fine details more efficiently than net
filters.
The above types of filters, though most often used for portrait applications,
also find uses wherever general sharpness is too great and must be subtly altered.
Some diffusion filters, notably called “dot” filters, can effectively combine
image softening with the appearance of mild highlight flare and a reduction in
contrast. Although perhaps more akin to mist and fog effects, they also fall into
the category of diffusion filters.
Sliding Diffusion Filters
When attempting to fine-tune the application of diffusion within a sequence, it
can be invaluable to be able to vary the strength of the effect while filming. This
can be accomplished by employing an oversized filter that has a graduated
diffusion effect throughout its length. It is mounted to allow sliding the proper
grade area in front of the lens, which can be changed on camera. When even
more subtle changes are required, maintaining consistent diffusion throughout
the image while varying the overall strength, a dual opposing-gradient filter
arrangement can be used.
Color-Conversion Filters
Color-conversion filters are used to correct for sizable differences in color
temperature between the film and the light source. These are comprised of both
the Wratten #80 (blue, as used for daylight film in tungsten lighting) and the
Wratten #85 (amber, as used for tungsten film in daylight) series of filters. Since
they see frequent outdoor use in bright sunlight, the #85 series, especially the
#85 and #85B, are also available in combination with various neutral-density
filters for exposure control.
Light-Balancing Filters
Light-balancing filters are used to make minor corrections in color
temperature. These are comprised of both the Wratten #81 (yellowish) and the
Wratten #82 (bluish) series of filters. They are often used in combination with
color conversion filters. Certain #81 series filters may also be available in
combination with various neutral-density filters for exposure control.
Color-Compensating Filters
Color-compensating (CC) filters are used to make adjustments to the red, blue
or green characteristics of light. These find applications in correcting for color
balance, light source variations, different reversal film batches and other color
effects. They are available in density variations of cyan, magenta and yellow, as
well as red, blue and green filters.
Decamired® Filters
Decamired (a trademark of the manufacturer) filters are designed to more
easily handle a wide range of color-temperature variations than the previously
mentioned filters. Available in increments of both a red and a blue series,
Decamired filters can be readily combined to create almost any required
correction. In measuring the color temperature of the light source and comparing
it to that for which the film was designed, we can predict the required filtration
fairly well.
A filter that produces a color-temperature change of 100° K at 3400° K will
produce a change of 1000° K at 10,000° K. This is because the filter relates to a
visual scale of color. It will always produce the same visible difference. A color
change of 100° K at the higher temperature would hardly be noticed.
To allow simple calculation of such differences, we convert the color
temperature into its reciprocal, that is, to divide it into 1. Then, since this is
usually a number with six or more decimal places, we multiply it by 10⁶, or one
million, for convenience. This is then termed the “mired value,” for micro
reciprocal degrees. It identifies the specific change introduced by the filter in a
way that is unrelated to the actual temperature range involved.
To see this more clearly, let’s look at the following changes in color
temperature from both the degree and mired differences. Numbers are degrees
Kelvin, those in parentheses are mireds:
9100 (110) to 5900 (170) = difference of 3200 (60)
4350 (230) to 3450 (290) = difference of 900 (60)
4000 (250) to 3200 (310) = difference of 800 (60)
From this, you can see that although the degree differential varies as the range
changes, the actual filtration difference for these examples, in mireds, is the
same.
To use this concept, subtract the mired value of the light source from that of
the film (the mired value of the Kelvin temperature the film is balanced for). If
the answer is positive, you need a reddish filter; if negative, use a bluish filter.
Mired-coordinated filters are termed “Decamireds.” Mired value divided by 10
yields Decamireds. The 60 mired shifts (above) would be produced by an R6
filter, where the higher values were that of the lighting. Sets of such filters come
in values of 1.5, 3, 6 and 12 Decamireds, in both B (bluish) and R (reddish)
colors. These numbers are additive; that is, a pair of R3s produces an R6. An R6
plus a B6 cancel each other out to produce a neutral gray.
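The rule above reduces to a few lines of arithmetic, sketched here in Python (the function names are illustrative):

```python
def mireds(kelvin: float) -> float:
    """Mired value: one million divided by the Kelvin temperature."""
    return 1_000_000 / kelvin

def decamired_filter(film_kelvin: float, source_kelvin: float) -> str:
    """Film mireds minus source mireds; positive = R series, negative = B."""
    shift = mireds(film_kelvin) - mireds(source_kelvin)
    series = "R" if shift > 0 else "B"
    return f"{series}{abs(shift) / 10:.1f}"

# Tungsten-balanced film (3200K) under average daylight (5500K):
print(decamired_filter(3200, 5500))   # R13.1 (e.g., an R12 plus an R1.5)
# Daylight-balanced film (5500K) under tungsten light (3200K):
print(decamired_filter(5500, 3200))   # B13.1
```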
Stripe filters are another type of graduated filter, having a thin stripe of color
or neutral density running through the center of the filter, graduating to clear on
either side. These are often used to horizontally paint various colors in layers
into a sky, as well as for narrow-area light balancing.
When seeking to gradually alter the color or exposure of a scene on-camera,
an extra-long graduated (attenuated) filter, mounted to slide in front of the lens,
can accomplish what might otherwise be a complicated task. For example, going
from the appearance of daylight to that of twilight can be synthesized with a day-
for-night filter in combination with a neutral density, both gradually altering
density in the one filter. With careful lighting, this can achieve a realistic change-
of-time look with a minimum of difficulty, as both exposure and color are altered
together in a controllable manner.
Coral Filters
As the sun moves through the sky, the color temperature of its light changes. It
is often necessary to compensate for this in a variety of small steps as the day
progresses, to match the appearance of different adjacent sequences to look as
though they all took place at the same time. Coral filters are a range of graded
filters of a color similar to a #85 conversion filter. From light to heavy, any effect
from basic correction to warmer or cooler than normal is possible. Corals can
also compensate for the overly cool blue effect of outdoor shade.
Sepia Filters
People often associate sepia-toned images with “early times.” This makes
sepia filters useful tools for producing believable flashbacks and for period
effects with color film. Other colors are still visible, which is different from
original sepia-toned photography, but appear infused with an overall sepia tint.
Didymium Filters
The Didymium Filter is a combination of rare earth elements in glass. It
completely removes a portion of the spectrum in the orange region. The effect is
to increase the color-saturation intensity of certain brown, orange and reddish
objects by eliminating the muddy tones and maximizing the crimson and scarlet
components. Its most frequent use is for obtaining strongly saturated fall foliage.
It is most effective at enlivening brick and barn reds, colors that aren't bright scarlet to begin with. The effect is minimal on objects of other colors. Skin
tones might be overly warm. Even after subsequent color timing or correction to
balance out any unwanted bias in these other areas, the effect on reddish objects
will still be apparent. Prior testing should be done because film color
sensitivities vary.
LL-D®
The LL-D was designed for shooting tungsten-balanced film in daylight when light levels are too low to afford the exposure loss of a #85 filter. It requires no exposure compensation and makes sufficient adjustments to the film to enable the timer to match the color of a properly 85-filtered original. It is not an all-around replacement for the #85. Use it only where needed for exposure purposes, and for subsequently printer-timed work.
There are also Green, Red and Blue viewing filters used to judge lighting
effects when doing process work like greenscreen.
Figure 5b. WITHOUT THE SPLIT-FIELD LENS (2): The
foreground here is sharp when the background is out of
focus. You can’t focus on both at the same time.
Figure 5c. WITH THE SPLIT-FIELD LENS: The close-up
lens half allows sharp focus on the foreground while the
camera lens is focused on the background. There is a soft
transition between the two areas at the edge of the split-field
lens at this middle-of-the-range lens opening.
Secondary Reflections
Lighting can cause flare problems, especially when using more than one filter.
Lights in the image pose the greatest difficulties. They can reflect between filter
surfaces and cause unwanted secondary reflections. Maintaining parallelism
between filters, and further aligning the lights in the image with their secondary
reflections where possible, can minimize this problem. In critical situations, it
may be best to make use of a matte box with a tilting filter stage. Tilting filters of
good optical quality only a few degrees in such a unit can divert the secondary
reflections out of the lens axis, out of the image, without introducing unwanted
distortion or noticeable changes in the filter’s effect.
Rain Spinners
When rain or boat spray sends distracting water drops onto the lens, you can
mount a rain spinner, sometimes called a rain deflector, as you would a matte
box on the front of the lens. A round filter, usually clear, spins at over 3000 rpm
and flings off droplets before they form. This is very effective when filming the
Tour de France in the rain.
There is good reason why the medium discussed in this manual is popularly
known as “motion” pictures. Since the earliest days of the industry,
cinematographers have been challenged to serve their stories by finding new and
inventive ways of moving the camera. This section examines the latest devices
that allow the operator to maintain control of the image while taking the viewer
on a ride up, down or anywhere else.
BODY-WORN SYSTEMS
Modern camera-stabilizing systems enable a camera operator to move about
freely and make dolly-smooth handheld shots without the restrictions or the
resultant image unsteadiness encountered with prior methods. These systems
transfer the weight of the camera unit to the operator’s body via a support
structure and weight distribution suit. This arrangement frees the camera from
body motion influences. It allows the camera to be moved by the operator
through an area, generally defined by the range through which his arm can
move.
Camera smoothness is controlled by the “hand-eye-brain” human servo
system that we use to carry a glass of water around a room or up and down
stairs. Viewing is accomplished through the use of a video monitor system that
displays an actual through-the-lens image, the same image one would see when
looking through a reflex viewfinder. The advantage of these camera-stabilizing
systems is that the camera now moves as if it were an extension of the operator’s
own body, controlled by his or her internal servo system, which constantly
adjusts and corrects for body motions, whether the operator is walking or
running. The camera moves and glides freely in all directions—panning, tilting,
booming—and all movements are integrated into a single, fluid motion that
makes the camera seem as if it were suspended in midair and being directed to
move at will. These camera-stabilization systems turn any vehicle into an instant
camera platform.
As with remotely controlled camera systems, servo controls may be used for
control of focus, iris and zoom on the camera lens.
ADVANCED CAMERA SYSTEMS, INC.
BodyCam
The BodyCam camera support system moves the camera away from the
operator’s eye by providing a monitor for all viewfinding. Once separated that
way, the camera is able to go into places and point in directions that would not
be possible if the operator’s head were positioned at the eyepiece. The lens can
be placed anywhere the arm can reach. This also allows the operator’s unblocked
peripheral vision to see upcoming dangers and obstacles.
The BodyCam prevents unwanted motion from being transferred to the
camera. With the average video camera, lens focal lengths up to 125mm are
possible, and on 35mm film cameras, focal lengths up to 250mm are usable.
The Balance Assembly is suspended in front of the operator. It supports the
camera, a viewfinder monitor and one or two batteries. The monitor is the green
CRT-type. Batteries are brick-style and last two to three hours.
Adjustments allow precise balancing of the Balance Assembly around a three-
axis gimbal. At this gimbal is an attach point where the Balance Assembly
connects to the Suspension Assembly.
The Suspension Assembly consists of an arm supported and pivoted behind
the left shoulder. The rear part of the arm is attached to a double-spring
arrangement that counters any weight carried by the other end of the arm and
provides vertical movement over a short distance. This arm isolates higher
frequency motion. The front end of the arm attaches to the Balance Assembly
with a hook. This hook is connected to a cable that travels from the front of the
arm to the pivot point of the arm and an internal resilient coil.
The Suspension Assembly is attached to the Backbrace. The Backbrace is a
framework that carries the load and transfers it to the human body.
Model L weighs 24 lbs (10.9kg) without camera or batteries and will carry up
to 25 lbs (11.3kg) of camera package (not counting batteries). Model XL weighs
28 lbs (12.7kg) without camera or battery and will carry up to 40 lbs (18.2kg)
without battery. Both models offer options for wired or wireless focus, zoom and
iris control.
GLIDECAM INDUSTRIES, INC.
Glidecam V-20
The Glidecam V-20 professional camera stabilization system was constructed
primarily for use with 16mm motion picture cameras and professional video
cameras weighing 15–26 lbs.
The lightweight, adjustable Support Vest can be adjusted to fit a wide range of
operators. High-endurance, closed-cell EVA foam padding and integral T-6
aluminum alloy create a vest which can hold and evenly distribute the weight of
the system across the operator’s shoulders, back and hips. For safety, quick-
release, high-impact buckles allow the vest to be removed quickly.
The adjustable, exoskeletal Dyna-Elastic Support Arm is designed to
counteract the combined weight of the camera and Camera Mounting Assembly
(Sled) by employing high carbon alloy springs. The arm can be boomed up and
down, as well as pivoted in and out and side to side. The spring force is field
adjustable to allow for varying camera weights. For safety, a dual-spring design is used to reduce possible spring-failure damage.
The free-floating Three-Axis Gimbal, which incorporates integrally shielded
bearings, creates the smooth and pivotal connections between the front end of
the arm and the mounting assembly. A locking mechanism allows the gimbal to
be placed at varying positions on the Central Support Post.
The Camera Mounting Assembly (Sled) is designed with a lower Telescoping
Center Post which allows for vertical balance adjustment as well as varying lens
heights. The center post can be adjusted from 22″ to 32″. The Camera Plate has
both 1⁄4″ and 3⁄8″ mounting slots to accommodate a variety of camera bases. For
remote viewing, an LCD monitor can be attached on the Base Platform, or either an LCD or a CRT monitor can be attached to the base's adjustable monitor
bracket. The base platform can also be configured to use counterbalance weight
disks if a monitor and/or battery is not mounted on the base platform. The back
of the base platform has threaded mounting holes for an Anton Bauer Gold
Mount battery adapter plate.
Accessories include a low-mode camera mount, sled offset adapter, vehicle
mount, and Vista Post 33″ central support post extender.
Glidecam V-16
The V-16 is the same as the V-20 but designed to support lighter cameras
weighing 10–20 lbs.
Glidecam Gold Series
The top-of-the-line Gold Series is made up of the Gold Vest and the Gold
Arm. The Gold Vest offers no-tools adjustment with a breakaway safety system.
It has a quick pressure-release dual-buckle design with positive locking buckles
and fast-ratcheting adjuster buckles. The vest features integral black anodized T-
6 aluminum with EVA foam padding, reversible and vertically adjustable arm
mounting plate, and an industry standard arm connector. An optional V-series
arm connector is available. Arm connectors are made of titanium.
The Gold Arm incorporates six titanium springs in order to handle a camera
load of 13–38 lbs. Its combined camera and sled carrying capacity ranges from
31–56 lbs. Unique to the Gold Arm are its Hyper-Extension Hinges, which allow
the arm more freedom of movement. The vest connectors are made of titanium.
GEORGE PADDOCK INC.
PRO-TM
The PRO-TM System’s Donkey Box II camera mounting platform has linear
slide bearings. The metal-on-metal design allows for smoother, easier movement
fore/aft and side/side. Finer-thread lead screws and captured lead nuts enable the
operator to make smaller, more accurate adjustments. A quick-release
mechanism allows the camera to be mounted/dismounted with ease.
The PRO Gimbal is compatible with all 1.5″ center posts and existing
oversized grips. A locking mechanism achieves concentric clamping about the
post, ensuring that all axes converge at the post’s center.
The PRO Battery Module II batteries provide an independent and clean power
supply to the monitor and video-related accessories while eliminating the need
for a converter when running 24V cameras. The Sled can be configured to carry
one to three batteries in multiple combinations to support a wider range of
camera systems.
The PRO Arm's departure from traditional tensioning methods results in minimal friction and a force curve designed to complement an operator's
instincts. It can be easily configured to accommodate a wide variety of load
requirements ranging from 13–75 lbs.
The Post system was designed to eliminate all external cables except the monitor cable. The Upper Junction Box and the 17½″–26½″ extendable post, which houses one internal cable, connect to the lower Junction Box via a quick disconnect/connect bayonet mount.
The Superpost allows the operator to achieve super hi/low-mode shots. It
extends from 51″–60″ (5′) and is of the same design configuration as the PRO-
TM Post.
The PRO Vest offers increased comfort due to improved load distribution. It is
designed to conform anatomically to the individual operator and allows the
operator a greater choice of hip and/or shoulder load distribution. A
revolutionary latching system permits vest tension to be relaxed between takes
without a change of settings.
The 5″ Diagonal High Intensity Monitor II is a self-contained design that
incorporates all electronic components within a water-resistant housing. The
high-voltage, high-intensity screen features familiar onscreen graphics and
indicators such as framelines, crosshairs and low battery and level indicators.
Controls for brightness, contrast, image orientation, standard/anamorphic and
graphics are all located on the faceplate for easy adjustment.
Nickel Metal Hydride Batteries deliver 14.4V at 3.5 amp-hours. A Fuel Gauge
LED display on each battery provides immediate readout of battery condition.
Weight is 1.9 lbs per battery.
The Pro Gyro Module is designed specifically for the PRO Battery Sled II.
The new Gyro system increases the configuration possibilities of one to three
gyros, while decreasing the time and complexity involved in transitioning to and
from the use of gyros. Keeping one or two batteries on the sled leaves sled
power in place, eliminating the need for an obtrusive umbilical cable. The
umbilical cable is replaced by a small, lightweight AC-only cable. A battery belt
carries one battery and the DC/AC inverter.
MK-V
MK-V Evolution
The MK-V Evolution Modular Sled system can evolve with the operator. With
the addition of optional extras, the basic MK-V Evolution rig can become a 24V
35mm sled. The base features a modular framework, allowing for customization
in the field to any configuration and battery system and the addition of various
accessories. It has a 4-by battery bar and clamp system for stability and universal
battery compatibility. The modular base is also compatible with other leading
stabilization systems. All battery systems, 12V and 24V, plug into the D-box
distribution box. The D-box Deluxe has built-in digital level sensing and an
onscreen battery display. MK-V’s latest, highly configurable Nexus base is
compatible with the Evolution.
The available carbon-fiber posts are two- and four-stage telescopic and
available in three different lengths. The base has optional gyro mounts. The
modular telescopic monitor works with CRT and LCD monitors. The V2
advanced, frictionless gimbal is compatible with leading sled systems, tools-free
and designed to have no play in any axis. Front-mounting standard and advanced
Xo vests are available. The Xo arm is frictionless and can support cameras of
any weight with the change of its springs and features a modular front end.
SACHTLER
Artemis Cine/HD
The Artemis Cine/HD Camera Stabilizing System’s post has a telescopic
length ranging from 16.5″–27.5″ (42–70cm). A variety of modules can be fixed
to either end of the post to enable cameras, monitors and battery packs to be
mounted. Scales on both tubes allow fast and precise adjustment. The gimbal’s
self-centering mechanism ensures aligned, precise position of the central bearing
for easy system balancing.
The side-to-side module situated between post and camera gives the camera a
sliding range of 1.18″ (30mm) left, right, forward and back and includes a built-in
self-illuminating level at the rear of the module. Two positions at the front and
two at the rear enable mounting of remote focus brackets. All adjustments or
locking mechanisms can be performed tool-free or with only a 4mm Allen
wrench. The Artemis vest, made from materials such as cordura, has a full-
length pivoting bridge made of reinforced aluminum alloy. It can be vertically
adjusted independently of body size. Most of the weight is carried on the hips,
reducing pressure on the operator’s back. The arm is available with different
spring sets that offer a load capacity ranging from 30–70 lbs (15–35kg). The two
standard arm capacities are 44 lbs (20kg) and 57 lbs (25kg). It is fully
compatible with all other stabilizer systems using the standard arm-vest
connector and a 5⁄8″ arm post.
TIFFEN STEADICAM
Universal Model III
The Steadicam system consists of a stabilizing support arm which attaches at
one end to the camera operator’s vest and at the other end to a floating camera
mounting assembly which can accept either a 16mm, 35mm or video camera.
The comfortable, adjustable, close-fitting camera operator’s vest transfers and
distributes the weight of the Steadicam system (including camera and lens)
across the operator’s shoulders, back and hips.
The stabilizer support arm is an articulated support system that parallels the
operator’s arm in any position, and almost completely counteracts the weight of
the camera systems with a carefully calibrated spring force. The double-jointed
arm maximizes maneuverability with an articulated elbow hinge. One end of the
arm attaches to either side of the vest’s front plate, which can be quickly
reversed to allow the stabilizer arm to be mounted on the right or left side of the
plate. A free-floating gimbal connects the stabilizer support arm to the camera-
mounting assembly.
The camera-mounting assembly consists of a central support post, around
which the individual components are free to rotate as needed. One end of the
post supports the camera mounting platform, while the other end terminates in
the electronics module. The adjustable video monitor is attached to a pivoting
bracket. An electronic level indicator is visible on the CRT viewing screen.
Electronically generated framelines can be adjusted to accommodate any aspect
ratio. Positions of the components may be reversed to permit “low mode”
configuration. The Steadicam unit is internally wired to accept wireless or cable-
controlled remote servo systems for lens control. A quick-release mechanism
permits the operator to divest himself of the entire Steadicam unit in an emergency.
A 12V/3.5A NiCad battery pack mounts on the electronics module to supply the
viewfinder system and film or video camera.
Master Series
The cornerstone of the Steadicam family, this high-end production tool
incorporates a new motorized stage, an enhanced Master Series vest, an
advanced 5″ monitor with artificial horizon and every state-of-the-art feature in
the industry today.
System Includes: Camera Mounting Chassis (Sled), Motorized Stage, Low-
Friction Gimbal, Composite Center Post and Covers, Advanced Monitor: 5″
CRT w/16:9 aspect ratio switchable to 4:3, Artificial Horizon, Frameline
Generator, On-screen Battery Display, Dovetail Accessory Bracket, 20–45 lbs
camera capacity Iso-Elastic Arm, No-tools Arm Post, Master Series Leather
Vest, four CP batteries with built-in Meter Systems, 12V/24V DC to DC Power
Converter, Dynamic Spin & Docking Bracket, Long and Short Dovetail Plates,
12V power cable, 24V power cable (open-ended), two power cables for
Panavision and Arri (24V) cameras, two power/video cables for Sony XC-75,
soft case for the Arm, soft case for the Vest, hard case for the Sled, Owner’s
Manual.
DOGGICAM SYSTEMS
The Doggicam
The Doggicam is a self-contained, lightweight and fully balanceable handheld
camera system. In low mode, shooting forward allows dynamic tracking free of
tracks, and shooting backwards, the operator can view the monitor to see where
he is going while leading the subject. High mode allows the operator to move the
camera in an extremely dynamic fashion. Configuring from low mode to high
mode takes 3–4 minutes.
At the heart of the Doggicam system is a lightweight, 6.5 lbs, highly modified
Arri 2-C with crystal speeds from 4–60 fps, PL or Panavision mounts and reflex
viewing. The camera may be set up for Super 35 or anamorphic formats. It is
easily adapted for 16mm cameras. Handheld bracketry is available. The system
is readily integrated for studio mode use by adding a riser block, rotating
eyepiece and video door.
Robodog
This motorized version of the Doggicam allows the operator full-tilt control of
the camera by the handle-mounted motor controller. The inherent stability of the
system is maintained by keeping the post vertical while allowing the full range
of camera tilt within the yoke.
Doggimount
The Doggimount quickly secures the camera at any point in space using a
compact aluminum framework similar to speedrail, yet much smaller.
Bodymount
Bodymount uses a specially designed vest and the standard Doggimount
bracketry to allow easy and comfortable attachment of the camera to a person.
The vest can be covered with wardrobe to allow framing to the middle of the
chest.
Pogocam
The Pogocam is a small, weight-balanced handheld system that utilizes a
converted 35mm Eyemo (100′ daylight loads) with a videotap and onboard
monitor. Nikon lenses are used. The compact size allows you to shoot where
most stabilization systems would be too cumbersome to go.
MITCHELL-BASED PLATFORM-MOUNTED
STABILIZATION SYSTEM
COPTERVISION
Rollvision
Rollvision is a lightweight, portable, wireless 3-axis camera system with 360°
unlimited pan, tilt and roll capabilities and a gyro for horizon compensation. The
dimensions are 14.5″ x 19″ x 19.5″ with a weight of 16 lbs (camera system
only). Rollvision works on numerous platforms including, but not limited to,
cable systems, cranes, jib arms, camera cars, Steadicams, tracking and rail
systems and much more. Mounting options consist of a 4″ square pattern 1⁄4″–20, Mitchell Mount adapter receiver 1⁄4″–20 and two adjustable T-nuts sized from 1⁄4″–20.
The Rollvision features electronic end stops, an hour meter with total system
running time, lock for pan and tilt, removable sphere covers, film and video
mode switch, and voltmeter.
Camera options include: Arri 435, 35-3, 2-C, 2-C SL Cine, 16SR-2 and 16SR-
3, Aaton XTRprod and A-Minima plus multiple Sony and Panasonic High-
Definition cameras.
Lenses up to 6 lbs can be used, including but not limited to, Panavision
Primos from 17.5mm to 35mm, Ultra, Super and Normal Speeds; 35mm primes;
and 16mm primes. Lens mount options include Arri standard, lightweight PL
and lightweight Panavision mounts.
Several types of film magazines can be used including 400′ SL Steadicam
Mag, 200′ Standard and 200′ Steadicam Mag.
Remotevision, also designed by Coptervision, is the wireless, 4-in-1 radio
control used to operate the Rollvision. Remotevision features a 5.6″ color
monitor and pan, tilt, and roll controls as well as camera zoom and iris controls.
Gyro on/off and roll/record switches are included. The speed of the pan, tilt and
roll is adjustable and a reverse switch allows the head to reset to the zero
position.
The Rollvision fits in one to two ATA-approved size cases, and travels with
the Rollvision operator as luggage.
GLIDECAM INDUSTRIES, INC.
Gyro Pro
The Gyro Pro camera stabilizer units liberate you to film while moving in any
direction, even over rough ground, on rough rivers, in high seas or turbulent air.
Accommodates most cameras weighing up to 100 lbs. Sets up in 20 minutes and
features 360° pan, 60° tilt and 30° roll. 36V DC power is required. Ships in three
cases totaling 310 lbs.
LIBRA HEAD, INC.
Libra III
The Libra III is a three-axis, digitally stabilized camera mount. It can be
operated in either direct (digital) mode, or the operator can choose to stabilize
one, two or all three axes at the flick of a switch. Custom-designed motors give it
the precision of a well-tuned geared head that is extremely stable. Wet and
muddy conditions are well tolerated. It fits on all industry-standard cranes and
gripping equipment and can be either suspended or mounted upright. The fail-safe comprises two-systems-in-one electronics. It pans 350°, tilts 90° and rolls
90°. The Libra III comes with Preston FIZ remote-control system, a lens witness
cam, 14″ and 8″ Sony color monitors, and AC and DC power supplies.
MOTION PICTURE MARINE
Hydro Gyro
A state-of-the-art digital-stabilization head that enables shot-making in the
harshest of filming environments. The Hydro Gyro mounts under your pan/tilt
head or camera. The result is a stable horizon without the pitch and roll
generated by camera boats, cars, aircraft, dollies or other moving platforms. It
weighs 32 lbs with a shipping weight around 70 lbs. It handles camera systems
up to 150 lbs and can use lenses up to 500mm. The Hydro Gyro is waterproof up
to 30′. Fine vibration can be reduced with a Hydro Gyro Shocksorber, a
lightweight, 2″ high, anti-shock and vibration damping system.
MINIATURE AERIAL SYSTEMS
COPTERVISION
CVG-A
The CVG-A is an autonomous, unmanned, small-scale helicopter platform and
camera system. Possessing the same characteristics as Coptervision's line-of-
sight, remote control helicopter (5′ in length with a 6′ blade span and weighing
40 lbs), the CVG-A incorporates a 3-D GPS waypoint navigation and flight-
control system which is programmed through a computerized Ground Control
Station (GCS).
Two different camera systems are available—the standard Coptervision three-
axis, gyro-stabilized camera system that carries 35mm, 16mm and digital video
cameras and/or a new smaller, three-axis, gyro-stabilized gimbal designed by the
company called Flexvision that carries smaller video cameras. The 35mm
camera package consists of a modified Arri 2-C and comes with either an Arri
Standard mount, PL mount or Panavision mount. Lenses include 16mm and
24mm Zeiss prime lenses, with an optional 40mm anamorphic or Cooke S4
Wide Prime lens.
Any given flight plan can be preprogrammed, flown and repeated as many times as necessary, thus allowing flight patterns to be memorized. This feature
allows for scalable shots that can be used in visual effects plates. A “return to
home” command can be input into the flight parameters so that the helicopter
will automatically return to its starting position if necessary.
Like the Coptervision line-of-sight helicopter, the CVG-A is modular in
design and has all of the same flight characteristics: up to 75 mph forwards, 45
mph backwards, 35 mph side to side, and hover. Since a digital microwave
downlink is used to bring the signal down to the ground, the CVG-A camera
operator can maintain precise framing while the helicopter is in motion. The
system runs on aviation-grade fuel and does not emit smoke from the exhaust
while flying backwards. The modular design allows for ATA-approved boxes so
that a total of six cases travel with the crew as luggage. With the CVG-A, there
is increased range (up to 3 miles and 450′) and endurance (from 30–45 minutes).
Flying-Cam
The Flying-Cam is an unmanned aerial filming helicopter. The Flying-Cam
aeronautical design, manufacturing quality, safety features and maintenance
procedures are equivalent to those used in General Aviation. The vehicle has all
the characteristics required for safe operation close to actors. The system is self-
sufficient and combines the use of aeronautics, electronics, computer vision and
wireless transmission technologies that are tailor-made to fit in a 30 lbs takeoff weight platform and a 6′ main rotor diameter. Cameras available are: 16mm,
S16mm, 35mm, S35mm and various Video standards including HD and
Broadcast live transmission.
The embedded Super 35mm 200′ camera is custom made by Flying-Cam and
mounted in a 1′ diameter, three-axis gyro-stabilized remote head. The camera is
the integration of an Arri 2-C and an Eyemo. The movement has been designed to achieve the same steadiness as an Arri 35-3 up to 30 fps and as an Arri 2-C up to 50 fps. The 200′ magazine uses standard cores and is part of the camera body. When a short reloading time is required, the cameras are hot-
swappable. The electric camera motor is crystal-controlled with two-digit
precision. Ramping is optional. The minimum speed is 4 fps, the maximum 50
fps. Shutter is 160°. Camera trigger is remote. Indicators and readouts—timer,
sync signal and roll out—are monitored from the ground control station. A color
LCD monitor and a digital 8mm tape recorder are provided for monitoring
playback on the ground. A color wireless video assist, used as parallax, gives
peripheral information, allowing for anticipation in the camera remote head
operation. A frameline generator is provided with prememorized ratios. The
mattebox used is an Arri 3″x3″. ND3-6-9, 85ND3-6-9, Pola and 81EF filters are
provided as standard. Aperture Plate is full and optical axis is centered on full.
Available lenses are wide-angle: Cooke, Zeiss and anamorphic 40mm. Aperture
is remote, focus remote is optional on standard lens and included on
Anamorphic. Lens mounts are Arri Standard, PL, BNCR, and Panavision.
The HD Camera is a 3-CCD with HD-SDI output and onboard recording
capability.
The Flying-Cam gyro-stabilized patented Remote Head includes one-of-a-
kind top-shot horizon control. Pan: 360° unlimited, 180°/sec adjustable from
ground. Roll: 360° unlimited, 60°/sec adjustable from ground. Tilt: 190° including 90° straight up and 100° down, 60°/sec adjustable from ground. On
90° tilt down with the 16mm Zeiss lens the front glass is 1′ above ground. The
maximum height above ground is 300′ (100m) to respect FAA safety separation
with general aviation. Flying-Cam provides the wireless Body-Pan® proprietary
system: the camera operator has the option to take control of the tail rotor giving
an unlimited unobstructed, gyro-stabilized 360° pan capability. The pilot has a
transparent control and override authority if needed.
The Flying Platform has a top speed of 75 mph (120 kph) and can combine all
the moves of a full-size helicopter. Takeoff weight of 30 lbs (15kg) reduces downwash to a minimum. Autorotation capability is enhanced by high rotor
inertia. Tail rotor is overpowered for Body-Pan operation. The flight is
controlled by a Flying-Cam licensed pilot using a computer-based radio. Density altitude is affected by temperature, humidity and pressure. The radio range is more than one mile. The maximum flight altitude is 14,000′. The
practical range is the visual one. Pilot-in-relay operations are optional.
SUSPENDED CAMERA SYSTEM
SPYDERCAM
Spydercam is a suspended camera system that allows filmmakers to move
their lens in safe, precise, multi-axis paths through space by combining highly
specialized rigging components with computerized winch systems and an
experienced crew.
• Speeds in excess of 70 mph
• Distances in excess of 4000′ long, 600′ high
• Compatible with most modern remote heads (Libra, Stab-C, Wescam,
Flir, SpaceCam, etc.)
• Video, HD, 16mm, 35mm, VistaVision, IMAX
• Configurations for one-, two- or three-dimensional envelopes
• Seamless integration with previz systems
• Live fly or preprogram movements interchangeably on any axis. (X, Y,
Z, pan, tilt, roll, focus, zoom)
• Small studio systems to giant outdoor runs
• Highly accurate (some 3-D systems are actually capable of double-
pass motion control work)
• Modular, portable.
The system offers greater accuracy and allows closer proximity to sets, actors and crew than a helicopter can safely negotiate, and offers a much greater speed and range of motion than can be provided by a standard camera crane.
CABLECAM
A scalable, multi-axis camera and stunt platform for film and HD. Cablecam
scales its rigs in interior and exterior locations. Truss, track, ropeways, towers,
and crane attachments are modular.
Types of rigs include:
Basic: One-axis point to point.
Elevator Skate: Two-axis XZ travel.
Flying V: Two-axis XZ travel.
Traveling High Line: Three-axis XYZ travel using a gantry-like rail and rope
rig.
Dual High Line: XYZ travel using parallel high lines.
Multi-V: Three-axis XYZ travel using four attachment points to construction
cranes or high steel.
Mega-V: The upside down crane. XYZ flight of a 20′ vertically telescoping
pedestal. Attached to the bottom of the pedestal is a horizontal 12′
extension/retraction arm which pans, tilts, and rolls. The camera head of
choice is attached at one end of the arm.
Tethered Ballooncam: One- to three-axis aerial control over expansive distances.
Speed range of systems: 0–70 mph. Length, width, and height of rigs: 100–
5,000′. Propulsion: electronic or hydraulic. Software: Kuper and custom.
Cablecam’s Kuper- or custom-controlled, servo-driven camera and stunt rigs are modeled in Maya and Softimage and exported to a previz model. “Virtual” Cablecam is flown in the previz milieu using a teach-and-learn protocol.
Aerial platforms, balloons, and flying 3-D cranes support the Libra, Stab-C
and XR as well as the Cablecam LP (low profile) stabilized HD head.
Lightweight, mobile operating stations. Repeatable motion control of both
hydraulic and electric servos. Noise-free communications over long distances.
Custom joystick control. Interfaces with other motion control systems for control
of steppers, servos, relays, lighting, cameras and camera heads. Remote
monitoring of servo drive functions. Mechanical and electronic braking.
Previsualization
by Colin Green
Asset Assembly
Working from a script, treatment, story outline or storyboards, key scene
elements comprising the main visual subject matter of the sequence are
constructed accurately in 3-D in the computer. The elements can include
characters, sets, vehicles, creatures, props—anything that appears in the
sequence being previsualized. The assets are ideally sourced through the Art
Department (e.g., as concept art, from storyboards or from physical or digital
models) and have already been approved by key creatives on the project. If
specific assets, such as locations, are still being determined, best approximations
are made.
Sequence Animation
The assets are laid out in space and time using 3-D software to create an
animated telling of the story’s action. The resulting previz sequence is reviewed
by key creative personnel and is further tweaked until accurate and approved.
Virtual Shoot
Virtual cameras are placed in the scene. Previz artists ensure that these
cameras are calibrated to specific physical camera and lens characteristics,
including camera format and aspect ratio. This step allows the animated scene to
be “seen” as it would from actual shooting cameras. These views can be based
on storyboard imagery or requests from the director or cinematographer. It is
relatively easy to generate new camera angles of the same scene or produce
additional coverage without much extra effort.
Sequence Cutting
As previz shots/sequences are completed, they can be cut together to reflect a
more cinematic flow of action, and are more useful as the entirety of the film is
evaluated and planned. Changes required for previz sequences often are
identified in the editing process.
by Robert C. Hummel III
Updated from the April 2008 issue of American Cinematographer magazine.
Screen Plane: The position in a theater where the projection surface is located;
a vertical plane coincident with the screen that helps define where objects
appear in front of, behind, or on the screen.
Plane of Convergence: The vertical plane where your eyes are directed to
converge on a 3-D object. If an object appears to be floating in front of the
movie screen, the plane of convergence is where that object appears to be.
The same would apply to objects appearing to be “behind” the screen.
Proscenium Arch: For our purposes, this refers to the edge of the screen, where an object becomes occluded.
Interocular (also called Interaxial): The distance between your eyes (Figure 3)
is properly referred to as interocular. In 3-D cinematography, the distance
between the taking lenses is properly called interaxial; however, more
recently, you will find filmmakers incorrectly referring to the distance
between taking lenses as the interocular. In 3-D cinematography, if done
properly, the interaxial distance between the taking lenses needs to be
calculated on a shot by shot basis. The interaxial distance between the taking
lenses must take into account an average of the viewing conditions in which
the motion picture will be screened. For large screen presentations, the
distance is often much less than the distance between an average set of
human eyes. Within reason, the interaxial can be altered to exaggerate or
minimize the 3-D effect.
The 3-D cinematographer must weigh several factors when determining the
appropriate interaxial for a shot. They are: focal length of taking lenses, average
screen size for how the movie will be projected, continuity with the next shot in
the final edit, and whether it will be necessary to have a dynamic interaxial that
will change during the shot.
Because the interaxial distances are crafted for a specific theatrical
presentation, a 3-D motion picture doesn’t easily drop into a smaller home
viewing environment. A movie usually will require adaptation and modification
of the interaxial distances in order to work effectively in a small home theater
display screen environment.
The facts presented in this chapter are indisputable, but once you become
enmeshed in the world of 3-D, you will encounter many differing opinions on
the appropriate ways to photograph and project a 3-D image. For example, when
you’re originating images for large-format 3-D presentations (IMAX, Iwerks,
etc.), some people will direct you to photograph images in ways that differ from
the methods used for 1.85:1 or 2.40:1 presentations. Part of this is due to the
requirements for creating stereoscopic illusions in a large-screen (rather than
small-screen) environment, but approaches also derive from personal
preferences.
Many think stereoscopic cinematography involves merely adding an
additional camera to mimic the left-eye/right-eye way we see the world, and
everything else about the image-making process remains the same. If that were
the case, this chapter wouldn’t be necessary.
First off, “3-D” movies are not actually three-dimensional. 3-D movies hinge
on visual cues to your brain that trigger depth stimuli, which in turn create an
illusion resembling our 3-D depth perception. In a theatrical environment, this is
achieved by simultaneously projecting images that represent, respectively, the
left-eye and right-eye points of view. Through the use of glasses worn by the
audience, the left eye sees only the left-eye images, and the right eye sees only
the right-eye images. (At the end of this chapter is a list of the types of 3-D
glasses and projection techniques that currently exist.)
Most people believe depth perception is only created by the use of our eyes.
This is only partially correct. As human beings, our left-eye/right-eye
stereoscopic depth perception ends somewhere between 19′ and 23′ (6 to 7
meters). Beyond that, where stereoscopic depth perception ends, monocular
depth perception kicks in.
Monocular depth perception is an acquired knowledge you gain gradually as a
child. For example, when an object gets larger, you soon learn it is getting closer,
and when you lean left to right, objects closer to you move side to side more
quickly than distant objects. Monocular depth perception is what allows you to catch a ball, for example.
3-D movies create visual depth cues based on where left-eye/right-eye images
are placed on the screen. When you want an object to appear that it exists
coincident with the screen plane, both left- and right-eye images are projected on
the same location on the screen. When photographing such a scene, the
cinematographer determines the apparent distance of the screen plane to the
audience by the width of the field of view, as dictated by the focal length of the
chosen lenses. For example, a wide landscape vista might create a screen-plane
distance that appears to be 40′ from the audience, whereas a tight close-up might
make the screen appear to be 2′ from the audience. Figure 4 illustrates when an
object is at the screen plane and where the audience’s eyes converge while
viewing that object. (Figure 4 also effectively represents where your eyes
converge and focus when watching a standard 2-D movie without special
glasses).
Figure 4. Eyes converging on an “on screen” object. As seen
from above, as if viewing from the theater’s ceiling, looking
down on the audience and the screen plane.
Figure 5. How a behind-screen object is created.
This doesn’t mean the 3-D approach is “wrong;” it’s just an example of why
3-D depth cues in a 3-D movie often seem to be exaggerated—why 3-D movies
seem to be more 3-D than reality.
When an object appears on the screen plane, every member of the audience
sees the object at the same location on the screen because the left- and right-eye
images appear precisely laid on top of each other (and thus appear as one
image). Basically, the image appears the same as it would during a regular “2-D”
movie projection (Figure 8).
Take a look at Figure 9, however, and see how things change when an object
is placed behind the screen plane. As you can see, a person’s specific location in
the theater will affect his perception of where that behind-screen object is
located. Also, how close that person is to the screen will affect how far behind
the screen that object appears to be; the closer one’s seat is to the screen, the
shorter the distance between the screen and the object “behind” it appears to be.
Again, it is not “wrong” that this happens. Figure 9 simply clarifies the point
that stereoscopic cinematography is not 3-D. Were it truly 3-D, every audience
member would see these behind-screen objects in the same location. When
planning shots for a 3-D motion picture, the filmmaker should be conscious of
how a dramatic moment might be received by viewers seated in various
locations. Audience position will affect the perceived location of off-screen
objects as well.
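To make the convergence geometry concrete, here is a minimal sketch in Python of the math implied by Figures 4 and 9. The eye separation, seat distances and parallax value are illustrative assumptions, not measurements from this chapter.

    # Perceived convergence distance of a stereo image point, derived from the
    # horizontal parallax between its left-eye and right-eye screen positions.
    # All values are illustrative assumptions.

    def perceived_depth_ft(parallax_in, eye_sep_in=2.5, seat_dist_ft=30.0):
        """Distance (in feet) at which the viewer's eyes converge.

        parallax_in: right-eye image position minus left-eye position (inches).
        0 = object appears at the screen plane; positive = behind the screen;
        negative ("crossed") = in front of the screen.
        """
        if parallax_in >= eye_sep_in:
            return float("inf")  # parallax at eye separation reads as infinity
        return eye_sep_in * seat_dist_ft / (eye_sep_in - parallax_in)

    # The same on-screen parallax reads differently from different seats:
    for seat in (15.0, 30.0, 60.0):
        print(seat, round(perceived_depth_ft(1.25, seat_dist_ft=seat), 1))

Note how doubling the seat distance doubles the apparent distance of a behind-screen object, which is exactly why viewers in different rows perceive different depth from the same projected frame.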
To casual observation, glasses that employ circular polarization look just
like their linear cousins in Figure 10, but they perform quite differently. A simple
explanation is that circular polarization can be oriented in clockwise and
counterclockwise directions. The left- and right-eye projected images are
polarized in opposing circular-polarization orientation, as are the respective
lenses on the glasses. The circular-polarization effect is graphically illustrated in
Figure 12.
A principal advantage of circular polarization is that the integrity of the left-
and right-eye image information is always maintained, no matter how much the
viewer tilts their head. Circular polarization is the technique employed with the
Real D “Z Screen” technique, used in conjunction with single-projector digital-
cinema projection.
A variant on the passive-glasses front is Dolby 3-D technology (Figure 13).
Developed in conjunction with Infitec GmbH, this system can use existing
screen installations, as it doesn’t require a silver screen. By using two slightly
shifted sets of RGB “primaries” (about half a dozen narrow spectral bands in
all), the process creates separation between the left- and right-eye images.
Dolby’s glasses then filter these spectral bands so each eye sees only its
respective left- or right-eye image information. Dolby’s color-spectrum
technique could technically be described as anaglyph; yet it is actually a much
more complex system that doesn’t cause any of the color distortion usually
associated with anaglyph.
Another type of passive glasses is the anaglyph system (Figure 14). The
separation of left- and right-eye imagery is achieved by representing the left-eye
material with the cyan spectrum of light and the right-eye material with the red
spectrum. When the glasses are worn, those colors are directed to your left and
right eyes, respectively. This system is often mistakenly thought to be how all 3-
D films were viewed in the 1950s; actually, this method was not used very often,
and almost all films in the 1950s utilized polarized projection and glasses. Over
the past few years, it has been revived for some 3-D presentations, such as Spy
Kids 3-D. This technique works on a standard white screen and does not require
a silver screen.
Active glasses, also called “shuttered glasses,” employ an electronically
triggered liquid-crystal shutter in each lens (Figure 15). This method alternately
synchronizes the left- and right-eye projected images with the respective liquid-
crystal shutters contained in the glasses. An advantage of active glasses is that
they do not require a silver screen. The glasses are electronic, require battery
power to function, and usually operate in conjunction with a transmitted infrared
signal that synchronizes the glasses with the projected left- and right-eye images.
Pulfrich glasses (Figure 16) are definitely the poor man’s choice for 3-D,
because they do not even produce true stereoscopic 3-D. The “Pulfrich Effect”
was first documented by Carl Pulfrich in 1922. This passive eyewear has one
clear lens and one lens that is significantly darker. It operates on the principle
that it takes longer for your brain to process a dark image than a bright image.
The resultant delay creates a faux 3-D effect when objects move laterally across
the field of view. Objects “appear” 2-D until they move laterally across the
frame, which causes an optical illusion that the brain interprets as a 3-D depth
cue. This technique has usually been limited to broadcast-TV applications such
as the short 3-D sequences in a 1990 Rolling Stones concert broadcast on Fox,
and Hanna-Barbera’s Yo Yogi! 3-D.
Is there more to 3-D production than the ground we’ve covered? You bet. But
this chapter should gird you with a foundational understanding of the medium,
allow you to converse in the vocabulary of 3-D, and enable you to begin to make
the medium your own. I also hope you now appreciate that the complexities of
3-D stereoscopic cinematography should not be underestimated.
Day-for-Night, Infrared and Ultraviolet
Cinematography
Black-and-White Films
For pictorial purposes, the greatest use of infrared-sensitive film for motion-
picture photography has been for “day for night” effects. Foliage and grass
reflect infrared and record as white on black-and-white film. Painted materials
that visually match in color but do not have a high infrared reflectance will
appear dark. Skies are rendered almost black, clouds and snow are white,
shadows are dark but often show considerable detail. Faces require special
makeup, and clothing can only be judged by testing.
A suggested EI for testing prior to production is daylight EI 50, tungsten EI
125 with a Wratten 25, 29, 70 or 89 filter, or daylight EI 25, tungsten EI 64 with
87 or 88A (visually opaque) filter. Infrared light comes to a focus farther from
the lens than does visual light. (Speak to your lens supplier for correct focus
compensation for infrared photography.)
Color
No human can see infrared; color film can only record it and render it as visible color. Kodak
Ektachrome Infrared Film 2236 was originally devised for camouflage detection.
Its three image layers are sensitized to green, red and infrared instead of blue,
green and red. Later applications were found in medicine, ecology, plant
pathology, hydrology, geology and archeology. Its only pictorial use has been to
produce weird color effects.
In use, all blue light is filtered out with a Wratten 12 filter; visible green
records as blue, visible red as green, and infrared as red. The blue, being filtered
out, is black on the reversal color film. Because visible yellow light is used as
well as infrared, focus is normal, and the use of a light meter is normal for this
part of the spectrum. What happens to the infrared reflected light is not
measurable by conventional methods, so testing is advisable. A suggested EI for
testing prior to production is daylight EI 100 with a Wratten 12 filter.
ULTRAVIOLET CINEMATOGRAPHY
There are two distinctly different techniques for cinematography using
ultraviolet radiation, and since they are often confused with each other, both will
be described.
In the first technique, called reflected-ultraviolet photography, the exposure is
made by invisible ultraviolet radiation reflected from an object. This method is
similar to conventional photography, in which you photograph light reflected
from the subject. To take pictures by reflected ultraviolet, most conventional
films can be used, but the camera lens must be covered with a filter, such as the
Wratten 18A, that transmits the invisible ultraviolet and allows no visible light to
reach the film. This is true ultraviolet photography; it is used principally to show
details otherwise invisible in scientific and technical photography. Reflected-
ultraviolet photography has almost no application for motion-picture purposes; if
you have questions about reflected-ultraviolet photography, information is given
in the book “Ultraviolet and Fluorescence Photography,” available from Eastman
Kodak.
The second technique is known as fluorescence, or black-light, photography.
In motion-picture photography, it is used principally for visual effects. Certain
objects, when subjected to invisible ultraviolet light, will give off visible
radiation called fluorescence, which can be photographed with conventional
film. Some objects fluoresce particularly well and are described as being
fluorescent. They can be obtained in various forms such as inks, paints, crayons,
papers, cloth and some rocks. Some plastic items, bright-colored articles of
clothing and cosmetics are also typical objects that may fluoresce. For objects
that don’t fluoresce, fluorescent paints (oil or water base), chalks or crayons can
be added. These materials are sold by art-supply stores, craft shops, department
stores and hardware stores. (Many of these items can also be obtained from
Wildfire Inc., 10853 Venice Blvd., Los Angeles, CA, 90034, which
manufactures them specially for the motion-picture industry.)
Fluorescence may range from violet to red, depending on the material and the
film used. In addition to the fluorescence, the object reflects ultraviolet light,
which is stronger photographically. Most film has considerable sensitivity to
ultraviolet, which would overexpose and wash out the image from the weaker
visible fluorescence. Therefore, to photograph only the fluorescence, you must
use a filter over the camera lens (such as the Wratten 2B, 2E or 3, or equivalent)
to absorb the ultraviolet.
The wavelengths of ultraviolet light range from about 10 to 400nm. Of the
generally useful range of ultraviolet radiation, the most common is the long-
wavelength 320 to 400nm range. Less common is the short to medium-
wavelength range of 200 to 320nm. In fluorescence photography you can use
long-, medium-, or short-wave radiation to excite the visible fluorescence
depending on the material. Some materials will fluoresce in one type of
ultraviolet radiation and not in another.
Certain precautions are necessary when you use ultraviolet radiation. You
must use a source of short- or medium-wave ultraviolet with caution because its
rays cause sunburn and severe, painful injuries to eyes not protected by
ultraviolet-absorbing goggles. Read the manufacturer’s instructions before using
ultraviolet lamps.
Eye protection is generally not necessary when you use long-wave ultraviolet
because this radiation is considered harmless. However, it’s best not to look
directly at the radiation source for any length of time, because the fluids in your
eyes will fluoresce and cause some discomfort. Wearing glass eyeglasses will
minimize the discomfort from long-wave sources.
There are many sources of ultraviolet radiation, but not all of them are suitable
for fluorescence photography. The best ultraviolet sources for the fluorescence
technique are mercury-vapor lamps or ultraviolet fluorescent tubes. If an object
fluoresces under a continuous ultraviolet source, you can see the fluorescence
while you’re photographing it.
Since the brightness of the fluorescence is relatively low, the ultraviolet source
must be positioned as close as practical to the subject. The objective is to
produce the maximum fluorescence while providing even illumination over the
area to be photographed.
Fluorescent tubes designed especially to emit long-wave ultraviolet are often
called black-light tubes because they look black or dark blue before they’re
lighted. The glass of the tubes contains filter material that is opaque to most
visible light but freely transmits long-wavelength ultraviolet. These tubes,
identified by the letters BLB, are sold by electrical supply stores, hardware
stores and department stores. They are available in lengths up to 4′ and can be
used in standard fluorescent fixtures to illuminate large areas. Aluminum-foil
reflectors are available to reflect and control the light.
Mercury-vapor lamps are particularly suitable for illuminating small areas
with high ultraviolet brightness. When these lamps are designed for ultraviolet
work, they usually include special filters which transmit ultraviolet and absorb
most of the visible light. Mercury-vapor ultraviolet lamps are available in two
types, long-wave and short-wave. Some lamps include both wavelengths in the
same unit so that they can be used either separately or together. If you use a light
source that does not have a built-in ultraviolet filter, you must put such a filter
over the light source. The filter for the radiation source is called the “exciter
filter.”
You can use a Kodak Wratten Ultraviolet Filter No. 18A or Corning Glass No.
5840 (Filter No. CS7-60) or No. 9863 (Filter No. CS7-54) for this purpose.
Kodak Filter No. 18A is available in 2- and 3-inch glass squares from photo
dealers. The dealer may have to order the filter for you. The Corning Glass is
available in larger sizes from Corning Glass Works, Optical Photo Products
Department, Corning, NY, 14830. The filter you use must be large enough to
completely cover the front of the lamp. The scene is photographed on a dark set
with only the ultraviolet source illuminating the subject. In order for the film to
record only the fluorescence, use a Kodak Wratten gelatin filter, No. 2A or 2B,
or an equivalent filter, over the camera lens to absorb the ultraviolet. When used
for this purpose, the filters are called “barrier filters.” Since the fluorescence
image is visible, no focusing corrections are necessary. Focus the camera the
same as for a conventional subject.
Determining Exposure
Many exposure meters are not sensitive enough to determine exposure for
fluorescence. An extremely sensitive exposure meter should indicate proper
exposure of objects that fluoresce brightly under intense ultraviolet, if you make
the meter reading with a No. 2A or 2B filter over the meter cell. If your exposure
meter is not sensitive enough to respond to the relative brightness of
fluorescence, the most practical method of determining exposure is to make
exposure tests using the same type of film, filters and setup you plan to use for
your fluorescence photography.
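As one way to organize such a test, the short sketch below (a hypothetical helper, not a procedure from this text) generates a half-stop bracket around a guessed starting stop; shoot the series and select the exposure that best holds the fluorescence.

    # Generate a half-stop exposure-test series around an assumed starting stop.
    # One half stop multiplies the f-number by 2 ** (1/4).

    def half_stop_bracket(center_stop=2.8, steps=3):
        return [round(center_stop * 2 ** (i / 4.0), 2)
                for i in range(-steps, steps + 1)]

    print(half_stop_bracket())
    # [1.66, 1.98, 2.35, 2.8, 3.33, 3.96, 4.71]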
Films
While either black-and-white or color camera films can be used for
fluorescence photography, color film produces the most dramatic results. The
daylight-balanced films will accentuate the reds and yellows, while the tungsten
balanced films will accentuate the blues. Since fluorescence produces a
relatively low light level for photography, a high-speed film such as Eastman
Vision 3 500T (5219) or Eastman Vision 250D (5207) is recommended.
SPECIAL CONSIDERATIONS
Some lenses and filters will also fluoresce under ultraviolet radiation. Hold the
lens or filter close to the ultraviolet lamp to look for fluorescence. Fluorescence
of the lens or filter will cause a general veiling or fog in your pictures. In severe
cases, the fog completely obscures the image. If a lens or filter fluoresces, you
can still use it for fluorescence photography if you put the recommended
ultraviolet-absorbing filter over the camera lens or the filter that fluoresces. It
also helps to position the ultraviolet lamp or use a matte box to prevent the
ultraviolet radiation from striking the lens or filter.
Aerial Cinematography
by Jon Kranhouse
Pilots
A truly qualified pilot is critical for both the safety and success of the
production; it is obviously essential for the pilot to have many hours of “time-in-
type” of similar aircraft. When filming in the United States, a pilot should be
operating under his or her own (or company’s) FAA Motion Picture Manual. This
allows a pilot some latitude within the FAA guidelines, which restrict the
activities of all other aircraft. A high level of expertise and judgment must be
demonstrated before the FAA grants such manuals. Of course, many flying
situations still require advance approval from the regional FAA office, as well as
other local authorities. Preproduction consultation with pilots is strongly
recommended.
Remotely operated gyrostabilized camera systems are often called upon for
very close work with long lenses; precise camera positioning is absolutely
critical. Few pilots have the skills necessary to hold a helicopter in a steady-
enough hover for a shot with a tight frame size. While the gyro systems isolate
the camera from helicopter vibration, an unstable hover causes XYZ spatial
excursions, resulting in constant relocations of the film-plane—an undesirable
parallax shift. The footage may appear as if the camera is panning or tilting;
actually, the helicopter is wobbling about in the hover position.
A local pilot and helicopter may be the only option when on very remote
locations. Some regional helicopter companies may not allow other pilots to fly
their helicopter. These local pilots must understand that some filming maneuvers
might require an exceptional degree of mechanical reliability from their aircraft;
when hovering below 300 feet in calm air, a safe autorotation is impossible. Spend
the minimum amount of time hovering low, and don’t press your luck with
unnecessary takes. If you must work with a helicopter pilot who has no film
experience, try to choose one with many hours of long-line work (i.e., heavy
cargo suspended by cable, or water-bucket fire suppression), as this type of
flying requires both a high level of aviation skill and judgment.
Fuel Trucks
Helicopter agility suffers greatly as payload increases; having a fuel truck on
location allows fuel weight to be kept to a safe minimum. Not all airports have
jet fuel, which is required for the jet-turbine helicopters listed below. Not all fuel
trucks are permitted to leave airports and travel on public roads.
Weather Basics
Aircraft performance suffers when air becomes less dense, which happens
when heat and/or altitude is increased. Helicopters are most stable when moving
forward faster than 20 mph air speed (translational lift). Therefore, steady
breezes can be beneficial, allowing the helicopter to hover relative to the ground
while actually flying forward in the airstream. Conversely, a strong, steady wind
can make it impossible for a helicopter to maintain a hover in a crosswind or
downwind situation. The types of helicopters used for filming are rarely
instrumented to fly in heavy fog or clouds; they operate under Visual Flight Rules
(VFR).
Max Speed
When an aircraft has any kind of exterior modification for camera mounts
(i.e., door removed, nose/belly mounts or gyro mounts), a maximum allowable
air speed known as VNE (“velocity never exceed”) is determined through
FAA certification. Be sure this VNE speed will match your subject. Air speed
should not be confused with ground speed; prevailing wind conditions may help
or hinder filming logistics.
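As a rough planning check, ground speed is simply air speed plus any tailwind component (or minus a headwind). The numbers below are assumptions for illustration; confirm the actual VNE for the aircraft and mount with the pilot.

    # Can the camera ship match a subject's ground speed without exceeding VNE?
    # VNE and wind values below are assumptions for illustration.

    def max_ground_speed_mph(vne_mph, wind_mph):
        """wind_mph > 0 is a tailwind (adds to ground speed); < 0 a headwind."""
        return vne_mph + wind_mph

    vne = 110.0  # assumed VNE with the door removed
    for wind in (-20.0, 0.0, 20.0):
        print(f"wind {wind:+.0f} mph -> max ground speed "
              f"{max_ground_speed_mph(vne, wind):.0f} mph")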
Safety
See the related Industry-Wide Labor-Management Safety Bulletins at:
http://www.csatf.org/bulletintro.shtml.
GLOSSARY OF FLIGHT PHYSICS
Awareness of these concepts, most of which are related to helicopters, will
expedite communication and increase safety.
AGL: Above Ground Level altitude, expressed in feet or meters.
Flight Envelope: Published by the aircraft manufacturer, this term refers to the
conditions in which a given aircraft can be safely operated. This takes into
account air density (altitude, humidity and temperature) and allowable
torque and temperature output of the engine/transmission/turbine.
Ground Effect: Condition of improved lift when flying near the ground. With
helicopters, this is roughly within 1⁄2 rotor diameter, becoming more
pronounced as the ground is approached.
Losing Tail-Rotor Authority: When helicopters attempt to hover or crab in a
crosswind, especially when heavily loaded in thin air (close to the edge of
their flight envelope), it sometimes happens that not enough power can be
diverted to the tail rotor to maintain the sideways position relative to the
wind direction. When the rudder pedal is maxed out, a wind gust can spin
the helicopter on its mast like a weather vane. This causes an abrupt loss of
altitude. If shot choreography requires such a position, have plenty of
altitude above ground level for recovery.
Rudder Pedals: Cause an aircraft to yaw; if using a nose mount, the same as
panning. Airplanes have a vertical rudder, though single-rotor helicopters
use a tail rotor to control yaw. Helicopters change the pitch of the tail rotor
blades to alter the force of the tail rotor, which counteracts the torque from
the main rotor.
Settling with Power: When helicopters hover out of ground effect, but still near
the ground in still air, prop wash can bounce off the ground and wrap back
around the top of the helicopter as a cyclonic down-draft. A pilot can be
fooled into applying more horsepower, which only increases the down-draft
intensity. All pilots know of this phenomenon, but those who work
infrequently in low-AGL situations may not be on guard. Try to change shot
choreography to continuously move out of your own “bad” air.
Yoke/Control Stick: Makes the aircraft pitch and roll (stick forward = nose
down, stick right = roll right). When using nose-mounts, coordinate with the
pilot so aircraft pitch and camera tilts are harmonious.
COMMON HELICOPTERS
Aerospatiale A-Star/Twin-Star (AS350 and AS355 twin)
Pro: Very powerful engine(s) for superior passenger and fuel-range capacity.
Extremely smooth flying with three-blade system. Accepts door and exterior
gyro-stabilized mounts (gyro mounts require helicopter to have aft “hard
points”). Tyler Nose Mount can fit if the aircraft is equipped with custom
bracket by the helicopter owner. High-altitude ability is excellent.
Con: Costlier than Jet Ranger. Does not hold critical position well in ground-
effect hover.
Aerospatiale Lama
Pro: Powerful engine mated to lightweight fuselage specifically designed for
great lifting ability and high-altitude work. Accepts door and some gyro-
stabilized mounts. Holds position quite well in ground-effect hover. Set the
helicopter height record in 1972 at over 40,000′.
Con: Expensive; very few available.
Steadicam
Using a short center post between camera and sled and attached to an
appropriate vehicle mount (not worn), good results can be achieved as long as
the camera is kept sheltered from the wind. Best for onboard a story aircraft,
rather than as a substitute for a door mount.
Limitations: Wind pressure striking the camera causes gross instability. FAA
Form 337 inspections required.
Flat Port
The flat port is unable to correct for the distortion produced by the differences
between the indexes of light refraction in air and water. Using a flat port
introduces a number of aberrations when used underwater. They are:
Refraction: This is the bending of light waves as they pass through media of
different optical density (the air inside the camera housing and the water
outside the lens port). Light is refracted 25 percent, causing the lens to
undergo the same magnification you would see through a facemask. The
focal length of your lens also increases by approximately 25 percent.
Radial Distortion: Because flat ports do not distort light rays equally, they have
a progressive radial distortion that becomes more obvious as wider lenses are
used. The effect is a progressive blur that increases with large apertures on
wide lenses. Light rays passing through the center of the port are not affected
because their direction of travel is at right angles to the water-air interface of
the port.
Chromatic Aberration: White light, when refracted, is separated into the color
spectrum. The component colors of white light do not travel at the same
speed, and light rays passing from water to glass to air will be unequally
bent. When light separates into its component colors, the different colors
slightly overlap, causing a loss of sharpness and color saturation, which is
more noticeable with wider lenses.
Dome Port
The dome port is a concentric lens that acts as an additional optical element to
the camera lens. The dome port significantly reduces the problems of refraction,
radial distortion and axial and chromatic aberrations when the center of the
dome’s inside radius is placed as close as possible to the nodal point of
the lens. When a dome port is used, all the rays of light pass through unrefracted,
which allows the “in-air” lens to retain its angle of view. Optically a “virtual
image” is created inches in front of the lens. To photograph a subject underwater
with a dome port you must focus the lens on the “virtual image”, not the subject
itself. The dome port makes the footage marks on the lens totally inaccurate for
underwater focus. Therefore lenses should be calibrated underwater. The dome
port offers no special optics above water and functions as a clear window.
UNDERWATER LENS CALIBRATION
To guarantee a sharp image when using either the flat or dome ports, it is best
to calibrate the lenses underwater by placing a focus chart in a swimming pool or
tank. Even on location, the hotel pool will offer a lot more control than an ocean
or lake. Set the camera housing on a tripod and hang a focus chart on a C-stand.
If possible, calibrate your lenses at night or in an indoor pool or tank. Crosslight
the focus chart with two 1,200-watt HMIs placed 2′ in front of the chart at a
45-degree angle. Starting at 2′, tape-measure the distance from the underwater
housing’s film plane to the focus chart. Eye-focus the lens and mark the
housing’s white focus knob data ring with a pencil. Slide the camera back to
continue the same process at 3′, 4′, 5′, 6′, 8′, 10′, 12′ and 14′. This should be
done for all lenses.
Once a lens has been calibrated, you must establish reference marks between the
lens and data ring so that you can accurately sync up for underwater focus when
lenses are changed during the shoot. After marking the data rings underwater
with pencil, go over the calibration marks with a fine point permanent marker on
the surface.
Dome Port
To calibrate a lens with a dome port, use this basic formula to determine the
starting point for underwater focus: Simply multiply the inside radius of the
dome by four. That number will be the approximate distance in inches from the
film plane at which the lens should be set as a starting point for underwater eye-
focus calibration. The most commonly used dome radius is 4″. Multiply the 4″
dome radius times 4. That gives you a measurement of 16″ at which to set your
lens in the housing to begin calibration for underwater photography. If a lens
cannot focus close enough to take advantage of the dome port, use a plus diopter
to shift the lens focus back. Ultimately, your lens should be able to focus down
to at least 10″ to be able to follow focus from 1′ to underwater infinity. When
using most anamorphic lenses with the dome port, you will have to add a +3
diopter to the lens to shift close focus back in order to focus on the aerial image.
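The rule of thumb above reduces to a single line of arithmetic; the sketch below simply restates it (the radius value is the commonly used one, assumed here).

    # Starting focus distance for dome-port calibration:
    # distance from film plane (inches) = 4 x inside radius of the dome.

    def dome_focus_start_in(dome_radius_in=4.0):
        return 4.0 * dome_radius_in

    print(dome_focus_start_in())  # 16.0 -> begin eye-focus calibration at ~16"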
Flat port
The refractive effect of the 25 percent magnification produces an apparent
shift of the subject towards the camera, to 3⁄4 its true distance. As a general rule
for flat-port underwater photography, if you measured your camera-to-subject
distance at 4′, you would set your lens at 3⁄4 the distance, or around 3′. For critical
focus, especially on longer lenses and when shooting in low light, underwater
eye-focus calibration is recommended. Shooting through a window or port of a
tank or pool is the same as using a flat port. If you do shoot through a tank
window and want to minimize distortion, the camera lens axis must be kept at
90° to the window’s surface. Camera moves will be limited to dollying and
booming to keep the lens perpendicular to the window’s surface. Panning and
tilting should not be done unless a distortive effect is desired.
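In code, the flat-port rules of thumb from this section look like this. It is a sketch of the 3⁄4-distance and 25 percent figures quoted above, not a substitute for eye-focus calibration.

    # Flat-port rules of thumb from the text above.

    def flat_port_focus_ft(measured_dist_ft):
        """Refraction makes the subject appear at ~3/4 its taped distance."""
        return 0.75 * measured_dist_ft

    def flat_port_focal_length_mm(in_air_focal_mm):
        """Effective focal length increases by roughly 25 percent underwater."""
        return 1.25 * in_air_focal_mm

    print(flat_port_focus_ft(4.0))          # 3.0 -> set a 4' subject at about 3'
    print(flat_port_focal_length_mm(35.0))  # 43.75 -> a 35mm behaves like ~44mm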
LENS SELECTION
Wider lenses are usually the choice in underwater photography because they
eliminate much of the water column between the camera and subject to produce
the clearest, sharpest image. This is especially important in low visibility
situations. Longer lenses, which are being used more and more underwater,
usually will not focus close enough to take advantage of the dome port’s “virtual
image” (you need to focus down to about 12″), so a diopter will have to be added
to the lens to shift the focus back, or you can switch to a flat port and allow for
the 25 percent magnification. Most Zeiss and Cooke lenses between 10mm and
35mm can close focus on the dome port’s “virtual image” without using
diopters. When using Panavision spherical lenses, the close-focus series of
lenses allows the use of the dome port without diopters. The commonly used
anamorphic lenses for dome-port cinematography are the Panavision “C” Series
30mm, 35mm and 40mm focal lengths. Because these anamorphic lenses only
close focus to approximately 30″, you will have to use a +3 diopter to shift
minimum focus back to 12″ to 14″ to focus on the “virtual image” created by the
dome port. When using longer lenses for spherical or anamorphic close-ups with
the flat port, I set the lens at minimum focus and move the camera back and
forth on the subject or a slate to find critical focus instead of racking the focus
knob.
FILTERS
Filtering for film underwater, as in above water applications, is used to lower
light levels, correct color balance and improve image contrast. Aside from the
standard 85 filter used to correct tungsten film for daylight exposure, the use of
color correction filters can be a very subjective issue. Water’s natural ability to
remove reds, oranges and other warm colors can be overcome by using
underwater color correction filters. These filters alter the spectrum of light
reaching the camera to reproduce accurate skin tones and the true colors of the
sea. Because all water is a continuous filter, the deeper you go beneath the
surface the more colors are filtered out. Also, the distance between the camera
and subject must be added to the depth of the water to determine the correct
underwater filter distance. UR/PRO™ and Tiffen AQUACOLOR® filters both
provide color correction for underwater filming, and both manufacturers offer
detailed information on the use of their filters. Polarizing filters can improve
contrast where backscatter from artificial light is a problem, but sunlight
underwater is not sufficiently polarized, which makes polarizing filters
ineffective in ambient light.
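The distance rule above can be expressed in one line (the values are illustrative; consult the filter manufacturer’s tables for the actual correction).

    # The water path the color-correction filter must compensate for is the
    # subject's depth plus the camera-to-subject distance.

    def filter_distance_ft(depth_ft, camera_to_subject_ft):
        return depth_ft + camera_to_subject_ft

    # A subject 15' deep and 10' from the camera needs the correction
    # recommended for roughly 25' of water.
    print(filter_distance_ft(15.0, 10.0))  # 25.0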
EXPOSURE METERS
Both incident and reflected light meters can be used for reading your exposure
underwater. In ambient light situations, the reflected meter is usually used
because it measures the light reflected from the subject through the water
column.
The most commonly used underwater light meters are the Sekonic L-164
Marine Meter and the Ikelite digital exposure meter. The Sekonic L-164,
designed in 1969 specifically for underwater photography, was discontinued in
1993 but then reintroduced in 1998 because of demand. It is self-contained,
needs no special housing, and is pretty straightforward in operation. The Sekonic
L-164 has a reflected 30-degree acceptance angle and gives very reliable
underwater exposures. The Ikelite digital is also a self-contained underwater
meter and can measure incident or reflected light. It has a 10-degree acceptance
angle in the reflected light mode. Both the Sekonic L-164 and the Ikelite digital
meters can be used above water or down to 200′ underwater. There are also
commercially available underwater cases for the Spectra IV, Minolta IV, Minolta
1° Spot and the Minolta Color Temperature meter.
UNDERWATER COMMUNICATIONS
Communication between the director/AD and cameraperson and crew is
critical. We’ve certainly come a long way since the days of banging two pieces
of pipe together, so depending on your budget and shot list, there are different
degrees of technology available.
A simple but effective means of communications between the cameraperson
and the above water director/AD is a one-way underwater P.A. speaker system.
Used in conjunction with video assist from the underwater camera, cues for roll,
cut and talent performance can be initiated by the director on the surface.
cameraperson can also answer questions via the video assist with hand signals or
a slate presented to the camera lens or by nodding the underwater camera for
“yes” or “no”.
For more complex set-ups where the cinematographer and the director need to
talk directly to each other, or when the director works underwater with the crew,
two-way communication is very efficient. While wearing full-face masks
equipped with wireless communication systems, the underwater
cinematographer can converse with the director and also talk to the underwater
and surface crews via underwater and above water speakers.
LIGHTING
Specifics on lighting underwater, as in above water, will vary from
cinematographer to cinematographer, and the following information is merely a
guide. Experimenting and developing your own technique should be your goal.
Today’s underwater cinematographer has a wide variety of HMI, incandescent,
fluorescent and ultraviolet lights and accessories from which to choose.
Underwater lights designed for motion picture work should have the ability to
control the light output by means of diffusion glass, filters, scrims, snoots and
barndoors. Always check the condition of lampheads, connectors, plugs and
cables before use. Become familiar with the lamp’s lighting system and be able
to perform simple repairs and maintenance.
Unlike air, even the best water significantly reduces clarity over distances as
short as 10′ to 20′. Artificial lighting is often needed to adjust light
levels for exposure and vision, as well as to modify the color balance and image
contrast of the subject. By incorporating supplemental lighting, the light’s water
path, from lamp head to subject to camera, can be kept relatively short, and the
selective color absorption properties of water will be much less apparent than if
the light had to originate at the water’s surface.
HMI light has a higher color temperature, approximately 5600°K (shorter,
blue-green wavelengths), and thus penetrates further, providing more
illumination over a wider area.
The reflection of artificial light off suspended particles in the water is known
as “back scattering”. The effect is much like driving your car with your
headlights on in heavy fog or a snowstorm. In addition to back scattering there is
also side scattering and in some instances even front scattering. Light scattering
can be greatly reduced by separating the light source from the camera and by
using multiple moderately intense lamps rather than a single high-intensity
lamp. It is advisable to keep the light at a 45-degree to 90-degree angle to the
lens axis of the camera. Generating a sharp light beam to help control the light
and further reduce backscatter is done with the aid of reflectors, barndoors or
snoots.
In addition to the basic effect of light intensity reduction in water due to
absorption, the matter is further complicated by the fact that absorption is a
function of color. Red light is absorbed approximately five times faster than
blue-green light in water. This is why long distance underwater photographs are
simply a blue tint without much color.
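The falloff is roughly exponential with the water path. The sketch below models it with Beer-Lambert-style attenuation; the 5:1 red-to-blue-green ratio follows the text, but the absolute coefficients are assumptions for reasonably clear water.

    import math

    # Illustrative attenuation coefficients per foot of water path.
    K_PER_FT = {"red": 0.15, "blue_green": 0.03}  # assumed; ~5:1 per the text

    def transmitted_fraction(color, water_path_ft):
        """Beer-Lambert falloff: fraction of light left after the water path."""
        return math.exp(-K_PER_FT[color] * water_path_ft)

    for color in ("red", "blue_green"):
        print(color, [round(transmitted_fraction(color, d), 3)
                      for d in (5, 15, 30)])
    # Red drops to ~1% by 30', while blue-green keeps ~40% -- hence the blue cast.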
Fill lighting underwater on a moving subject is best accomplished by having
the lamp handheld by the underwater gaffer. With the lamp being handheld, the
underwater gaffer can maintain a constant distance between the lamp and the
moving subject, keeping the exposure ratio constant. In a more controlled
situation where the subject is confined to an area of a set, light can be bounced
off a white griffolyn stretched on a frame or through a silk on a frame.
BLUESCREEN AND GREENSCREEN
There are different schools of thought on which color screen is best to use but
both bluescreen and greenscreen have been used successfully underwater for
many years. Backlighting a bluescreen underwater has been done successfully,
but when the screen material is exposed to chlorine, it sometimes picks up
density and requires more and more light to maintain its exposure reading. Front
lighting fabric screens underwater has been very successful, and depending on
the size of the screen, the logistics of the location and the budget, there are a
variety of underwater lights to choose from. Underwater lamps also can be
augmented with above water lamps. It is best to mount the screen on a movable
frame, so it can be pulled out of the water and hosed off after every day of
shooting. Chlorine bleaches the screen quickly, and rotating the screen 180°
every day helps minimize the color difference. From the bottom to the top, the
frame should tilt back approximately 10°. Extending the screen above the
surface of the water, even if only by a few inches, is very important. By
extending the screen above the water, the under surface of the water acts as a
mirror and reflects the screen’s color on the surface. This extends the screen’s
coverage and allows the cameraperson to tilt up to follow the shot while using
the water’s surface as an extension of the screen. It is also important to stop or at
least reduce the agitation of the water to prevent a visible ripple effect on the
screen.
LIGHTING SAFETY
From a safety standpoint, when using AC power in or near water or other
potentially wet locations, it is essential (and in most cases, mandatory) to use a
Class “A” UL approved GFCI (Ground Fault Circuit Interrupter) for actor and
film crew protection.
The purpose of a GFCI is to interrupt current flow to a circuit or load when
excessive fault current (or shock current) flows to ground or along any other
unintentional current return path. The standard level of current needed to trip a
“people” protection Class “A” GFCI is 5mA.
Class “A” GFCIs are designed for “people” protection. Other GFCIs are
designed for various levels of “equipment” and “distribution” protection. In
general, if you can’t readily see the Class “A” marking on a GFCI, the device is
probably not designed for “people” protection. To make sure that the GFCI being
used is a Class “A” device, Section 38 of UL 943 (the standard for GFCIs)
requires that the “Class” of the GFCI is clearly marked on the device in letters
that are clearly visible to the user.
Today, Class “A” GFCIs are readily available for loads up to 100 Amps,
single- and three-phase, for both 120v and 208/240v fixed voltage applications.
Certain special GFCIs can also operate on variable voltage power supplies
(behind dimmers). If the device’s label does not clearly state the working voltage
range of the unit, check with the device’s manufacturer before using the unit on
dimmers (or other variable voltage applications), since conventional GFCIs may
not operate correctly below their rated voltage.
Specialty GFCI manufacturers produce advanced GFCI devices that offer
other important safety features, such as predictive-maintenance monitoring and
verification of power-supply phase and ground correctness.
Choose your GFCIs carefully: if misapplied, these important safety devices
may silently fail to function, leaving your installation with a false sense of
security.
by Bill Hogan, Sprocket Digital and Steve Irwin, Playback Technologies, Inc.
Flat-panel displays
With display technology constantly evolving, there is now and will continue to
be a large variety of new and constantly changing image display devices. Each
one poses unique and technically challenging problems for successful
photography.
Plasma displays
Often misunderstood, this display technology is a unique mix of both CRT and
LCD science. The images are created by ionizing a gas, which emits ultraviolet
light that strikes a phosphor. The images are not always scanned like a
traditional CRT display, but addressed pixel by pixel or row by row without
regard for the input signal’s refresh rate.
This can create very annoying and different-looking artifacts from those seen
when shooting standard monitors. Some manufacturers and other companies
have taken steps to modify plasma panels that allow them to be used in film
productions and photographed at 30, 25 and 24 frames per second. When
presented with the task of shooting plasma screens, try to include them in a test
day to work out syncing, exposure and color-temperature issues.
DLP DISPLAYS AND PROJECTORS
This fairly recent technology used in a wide variety of video and data
projectors employs Digital Micromirror Devices (DMDs) as the fundamental
image-creating component. Thousands of tiny mirrors acting as individual pixel
or light-switching devices are fabricated using familiar semiconductor
technology. DLP projectors are capable of extremely high lumen outputs and can
deal with very high-resolution video and computer signals. Again, as in plasma
displays, the way individual pixels are addressed and the rate at which they are
turned on and off does not always relate exactly to the input signal’s frame rate.
Often, video and computer inputs are formatted or scan-converted onto part or
all of the micromirrors, depending on the input signal’s resolution. When these
displays are photographed, various artifacts such as image flicker and density
breathing can occur. It is usually best to match frame rates between the DLP
projector’s input signal and your desired film camera speed. It is strongly
recommended to also shoot your own tests when dealing with new or different
projectors.
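As a first-order sanity check before such a test (a rough heuristic only; the pixel-addressing schemes described above add artifacts this ignores), flicker tends to beat at the difference between the display’s refresh rate and the nearest multiple of the camera’s frame rate.

    # Rough flicker estimate for shooting a display: the image beats at the
    # difference between refresh rate and the nearest multiple of frame rate.

    def beat_hz(refresh_hz, camera_fps):
        multiple = max(1, round(refresh_hz / camera_fps))
        return abs(refresh_hz - multiple * camera_fps)

    print(beat_hz(60.0, 24.0))  # 12.0 -> visible flicker; sync or re-rate needed
    print(beat_hz(48.0, 24.0))  # 0.0  -> locked, no beat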
VIDEO WALLS AND LED DISPLAYS
Traditional video walls are based on either CRT monitors or CRT/LCD
projection engines. These monitors or “projection cubes” are stacked into the
desired size and shape and fed from a controller or processor that splits up the
input signals across the individual monitors. When faced with a video wall, find
out the underlying display technology and use the guidelines discussed earlier
for them. Recently introduced as a large venue display technology is the LED
video screen/wall. High brightness light-emitting diodes are used to create
individual pixels. These displays are made to be viewed at long distances and are
bright enough to be used outdoors in full sunlight. LED displays are now being
used in sports arenas, at rock concerts and at other special-event venues, and can
pose unique problems for filming. With several manufacturers in the
marketplace all designing their own control electronics, special research and film
tests will be required to determine if they can be used practically.
COLOR TEMPERATURE
Once the frame rate and syncing issues have been worked out, the final
important aspect to be dealt with is color temperature.
Regardless of the display technology you are photographing, most CRT, LCD,
plasma and DLP devices are intrinsically daylight or near-daylight in color
temperature. Most consumer televisions will be at or near 6000°K to 8000°K.
Broadcast and industrial video monitors should be close to 6500°K. Computer
CRT and LCD monitors vary greatly, between 5000°K and 9000°K; some are
even as high as 12,000°K.
Since a majority of these devices will be photographed under tungsten
lighting, some method of color correction/compensation will be required for
proper grayscale reproduction. If an uncorrected display is photographed under
tungsten lighting, the image will take on a very strong blue tone. You can use
standard color temperature meters to determine the display’s normal color
temperature by measuring a known white area. This measurement can be used as
a starting point to help in adjusting the display’s internal color balance controls.
A majority of computer monitors and projectors are now equipped with menu
functions and presets to change the color temperature of the image. However,
these internal controls will not always allow the display to reach 3200°K. A
possible alternate solution is to gel the monitor or filter the projector, but these
methods are not always practical.
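Gel and filter choices for this correction are usually worked out in mireds (one million divided by the Kelvin temperature), since gels are rated by their mired shift. A small sketch, with the 6500°K-to-3200°K case worked out:

    # Mired shift needed to bring a daylight-balanced display to tungsten film.

    def mired(kelvin):
        return 1_000_000.0 / kelvin

    def shift_needed(display_k, film_balance_k=3200.0):
        return mired(film_balance_k) - mired(display_k)

    # A 6500K monitor on 3200K-balanced film needs about +159 mireds,
    # i.e. a full-CTO-class gel; the exact gel should still be tested.
    print(round(shift_needed(6500.0)))  # 159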
CONCLUSION
A majority of productions will hire a specialist to be responsible for playback
and/or projection. They will usually precorrect the playback material or display
device for tungsten photography. The playback specialist will also
assist you in setting exposure and work with the 1st AC concerning film camera
sync equipment setup and operation.
There is a large amount of misinformation circulating in the industry, and it is
very difficult to keep up on all the display technologies as they evolve. Often,
manufacturers will update or introduce new products several times a year, and
what worked for you six months ago may not work the same way now. It cannot
be emphasized enough to test any new or different display device when
considering its use in your production.
It is important to note here that digital technology is not a panacea and is not a
substitute for good lighting and skilled cinematography. Although substantial
creative control is provided by the selective and localized tools in today’s color
correctors, you have to start with a sharp and well-exposed image or the final
results will be compromised.
Digital Intermediate also supports better picture quality. Digitally graded
pictures can be more precisely tuned and more consistent than traditional film
prints. Blowups from Super-35 to 2.39 anamorphic format, or from Super-16 to
1.85 format, are much cleaner and sharper than optical blowups. And,
increasingly, multiple printing negatives are being created so that every print can
be a first generation “show” print, eliminating the degradation of the traditional
IP/IN analog duplication process.
In addition to making the film printing negative for traditional cinema
distribution, the Digital Intermediate process produces a digital cinema
distribution master, and the various home video masters. This consolidates the
creatively supervised color correction steps, saving time and money. Digital
Intermediate also enables the use of lower-cost 3-perf 35mm and Super-16 film
formats, as well as the mixing and matching of various film and digital
origination formats, all of which can represent substantial production cost
savings over traditional 35mm film.
BRIEF DIGITAL HISTORY
Digital technology has changed the way that feature films are made, replacing
traditional processes with new ones that offer expanded creative freedom, faster
results, and lower costs. Hollywood has been slower to adopt digital imaging
technology than most other imaging industries. One of the key hurdles has been
the overwhelming size of a digital movie—a movie scanned, stored and
processed in 2K resolution is around 2 TB in its finished form, and could easily
be 50 TB or more before it is edited. And these numbers are four times larger if
you are working in 4K.
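Those sizes follow directly from the frame arithmetic. A back-of-envelope sketch (10-bit DPX packs three channels into four bytes per pixel; header overhead and handles are ignored, and the 100-minute running time is an assumption):

    # Rough storage for a finished feature scanned to 10-bit DPX frames.

    BYTES_PER_PIXEL = 4  # 10-bit RGB packed into a 32-bit word

    def feature_terabytes(width, height, minutes=100, fps=24):
        frames = minutes * 60 * fps
        return width * height * BYTES_PER_PIXEL * frames / 1e12

    print(round(feature_terabytes(2048, 1556), 2))  # ~1.84 TB at 2K full aperture
    print(round(feature_terabytes(4096, 3112), 2))  # ~7.34 TB at 4K (4x the data)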
Digital technology began to make inroads into the moviemaking process about
20 years ago. In the late 1980s, Avid introduced the Media Composer, a
nonlinear editing system that revolutionized the way movies were edited. Avid
wasn’t the first to offer a nonlinear editing system, but was the first to make it
simple to use and relatively inexpensive. In a few short years, the industry
converted from cutting and splicing film on a Moviola to nonlinear editing on
computer-based systems.
Following the pioneering work at ILM, Kodak introduced the Cineon Digital
Film System in the early 1990s to provide an efficient means of converting film
to digital and back to film, with the intention of offering a computer-based
environment for the postproduction of visual effects and for digital restoration.
Kodak established Cinesite, a service bureau that offered film scanning and film
recording services to enable the industry. Within a few short years, visual effects
converted entirely from traditional optical printers and manual rotoscoping
processes to digital compositing, 3-D animation and paint tools.
Kodak’s original goal for Cineon was a complete “electronic intermediate
system” for digitally assembling movies. In the early 1990s, it was not cost
effective to digitize and assemble whole movies this way, but in choosing
computer-based platforms over purpose-built (traditional television)
architectures, these costs came down steadily, driven by the much bigger
investments that other industries were making in computing, networking and
storage technologies.
By the end of the 1990s, Kodak had discontinued the Cineon product line, but
Apple, Arri, Autodesk, Filmlight, Imagica, Quantel, Thomson and other
companies were offering products that provided the necessary tools for the
digital postproduction of movies. Kodak chose to focus its energies on the
demonstration and promotion of a new process dubbed “Digital Intermediate”
that offered to revolutionize the traditional lab process of answer printing and
release printing. The digital intermediate process provided digital conforming
and color grading tools that opened up new creative opportunities, while
improving the quality of the release prints. In 2000 Cinesite pioneered the digital
intermediate process on the Coen Brothers’ O Brother, Where Art Thou?, utilizing
the Thomson Spirit Datacine and a Pandora color corrector, working in a digital,
tapeless workflow at 2K resolution.
Since that early start, the supporting technology has evolved substantially with
software-based color correctors offered from Autodesk, Filmlight, daVinci and
others, and faster film scanners and recorders available. Within five years, the
industry had embraced the digital intermediate process, with a majority of major
Hollywood films, and many independent films, now finished digitally.
DIGITAL DAILIES
With the move to offline editing in the 1990s, and the increasing scrutiny of
the production process by studio executives, video dailies were widely
embraced. However, while the low-resolution, compressed video formats may
have been good enough for editorial or continuity checking, they were not
sufficient for judging focus, exposure and the subtleties of lighting. For this
reason, most cinematographers demanded traditional film dailies, although this
was sometimes overruled for cost savings.
Now that HD infrastructure is widely available in postproduction, and the
costs of the transfer, playback and projection displays have come down to
reasonable levels, HD dailies are commonly used for the critical dailies review
by the cinematographer and the director. Certainly, 1080/24p HDTV transfers
with a Spirit Datacine represent a huge step up in resolution, making it possible
to judge focus and detail in dailies.
However, high quality dailies require more than just HD resolution.
Compression technology is generally used to reduce the cost of transport and
playback. It is critical that the encoding bit rate be high enough to eliminate
compression artifacts, so that the quality of the images can be judged. For
1080/24p this means a bit rate of at least 35 Mbps for MPEG2 or H.264 encoded
content; while this bit rate is supported by most servers and by HD media like
Blu-ray Discs, it is not artifact-free.
High quality dailies also require a color-calibrated telecine transfer. Many
facilities offer best-light or timed video transfers with the colorist making the
picture look good on a traditional CRT monitor. While this produces good
pictures for editorial or executive use, it does not produce a picture that looks
like the final film product, and can create false impressions. Furthermore, this
method provides no feedback on exposure level. A better approach that is now
being embraced by some facilities is an extended range, best-light scan that
protects the full range of the original negative, while displaying the picture
through a look-up-table (LUT) that emulates print film. The picture can be
“printed” up or down by applying offsets calibrated to match traditional printer
lights. This provides the exposure feedback that a cinematographer needs, along
with the capability of “printing” the picture up or down in the dailies playback to
explore the range of the negative. Editorial and executive copies are created with
the LUT burned in so that the pictures that they see include the print film
characteristic.
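A sketch of how such printer-light offsets map onto 10-bit log code values. The constants are common Cineon-style conventions (roughly 90 code values per stop, with the stop divided into 12 printer points), not universal standards, so calibrate against your facility’s LUTs.

    # "Print" a Cineon-style 10-bit log image up or down by printer points.
    # Assumed conventions: ~90 code values per stop, 12 printer points per stop.

    CV_PER_POINT = 90.0 / 12.0  # ~7.5 code values per printer point

    def print_offset(code_value, points):
        """Offset a 10-bit log code value by +/- printer points, clamped."""
        return max(0, min(1023, round(code_value + points * CV_PER_POINT)))

    print(print_offset(445, +2))  # 460: LAD gray (445) printed up 2 points
    print(print_offset(445, -6))  # 400: printed down half a stop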
For dailies review, it is important that a color-calibrated digital projector be
used in a dark theatre or location trailer. Small HD monitors or consumer flat-
panel displays are not acceptable for accurate color reproduction or visualization
of the scale of the movie. Recently, several 1080p projectors have been
introduced for the home theatre market that offer good contrast and stable
calibration at a reasonable (under $10,000) price point. However, calibration and
stability is critically important, so it is best to rent these from a facility that can
provide experienced technicians for installation and calibration.
Communicating the creative intent is another challenge. Previsualization or
“look management” products are available from Gamma and Density, Iridas,
Filmlight and others that allow the cinematographer to capture a digital still and
apply color correction on a laptop computer. Alternatively, some
cinematographers use Adobe Photoshop. These systems can create a DPX or
JPEG file to be sent to the dailies colorist as a proxy for guiding the color
correction, along with notes or verbal instructions. However, this method takes
an assistant to capture and catalog the digital pictures on the set, as well as time
for the cinematographer to grade the pictures, so these tools see limited use.
Furthermore, the pictures still require interpretation and visual matching.
Recently, the ASC Technology Committee has defined a format for
interchanging simple color metadata, via a Color Decision List (CDL). Many of
the color corrector manufacturers have implemented the ASC CDL, providing a
means of translating the CDL into their internal color correction parameters.
Facilities such as Laser Pacific (aIM Dailies), Technicolor (Digital Printer
Lights) and E-Film (Colorstream) are early adopters of the ASC CDL into their
internal workflows.
The ASC CDL information can be inserted into the Avid Log Exchange
(ALE) files that are generated in telecine or video dailies, so that it can be
carried through the editorial process and into the conformed DI. Here, the color
correction decisions made on-set or in dailies can be used as the starting point
for the final DI grading. This has been implemented in a few facilities for their
internal workflows, but effective execution requires careful and continuous
calibration of all telecines, scanners and reference displays.
DATA DAILIES
With modern scanners offering high speed scanning capabilities, and with the
cost of storage coming down, it is now possible to forego traditional telecines
and videotape workflows and scan dailies footage directly to data. This approach
has been pioneered by E-Film in their CinemaScan process. Dailies are scanned
to the SAN as 2K DPX files, where they are graded, synched and logged with
software tools. The scanned files are archived to LTO4 tape and stored in a
massive tape library, where the select shots can be pulled based on the editor’s
EDL, and conformed for the DI finishing step later.
SCANNING
Once the film has been edited and is “locked”, the editor provides an Edit
Decision List (EDL) to the scanning facility. There is no need to cut the
negative. The EDL is translated to negative pull lists with keycode numbers
marking the in and out points for scanning. The scanners provide automated
scanning based on frame counts or keycodes, so it is actually easier to work with
uncut camera or lab rolls, rather than compiling the select shots on a prespliced
roll. And the negative is cleaner without this extra handling step.
Several manufacturers provide digital film scanners that are used for digital
intermediate work. The Thomson Spirit Datacine, and its successor the Spirit4K
scanner which provides real-time 2K scans, is widely used. The ARRIscan and
Filmlight Northlight pin-registered scanners are also popular, but run at a
somewhat slower rate. Since the Spirit4K is edge-guided, picture stability
depends on clean film edges and tight transport velocity control. Steadiness is
seldom a problem, but some people prefer pin-registration, which has always
been used for visual effects shots. All of these scanners are calibrated to output
logarithmic printing density in 10-bit DPX files.
Framing is a critical issue since camera apertures and ground glasses vary. It is
extremely important to shoot a framing chart with each production camera and to
provide that chart to the scanning facility for set up and framing of the scanner.
In addition to defining the scan width, it is important to define the target aspect
ratio. With the increasing use of Super-35 (full aperture) cameras, the target
aspect ratio of 1.85 or 2.39 must be specified, as well as whether it is desired to
scan the full height to use for the 1.33 home video version. Table A summarizes
the typical camera aperture dimensions and scan resolutions for popular motion
picture production formats including Academy, Cinemascope, Super-35 and
Super-1.85, 3-perf 35mm and Super-16. All of these are transferred to 35mm
inter-negatives for release printing.
2K OR 4K
One of the biggest debates within the technical community has been the
question of what resolution to use for digital cinema mastering and distribution,
which in itself is a continuation of a longstanding debate about the resolution of
a frame of motion picture negative film. The studios are evenly divided between
two opinions—2K (2048 pixels wide) is good enough and more cost effective, or
4K (4096 pixels wide) is better and will raise the bar. This debate raged for
nearly a year, before DCI arrived at the grand compromise of supporting both 2K
and 4K digital cinema distribution masters, using the hierarchical structure of
JPEG2000 compression to provide a compatible delivery vehicle.
Today, most Digital Intermediate masters are produced at 2K resolution, although more and more are being produced at 4K as costs come down. Working
in 4K requires four times the storage and four times the rendering time as 2K.
The creative color grading process for 4K can be done in essentially the same
time as 2K, because proxies are used to support interactive adjustment and
display. The color correction can be rendered to the 4K images once the reel has
been approved. Several of the software-based color correctors support this
architecture, including products from Autodesk, Filmlight, Digital Vision, and
DaVinci.
So what do you get for working in 4K? Does it produce pictures that are twice
as good? If the original film format is 35mm and the release format is 35mm (the
only viable film distribution format except for 70mm IMAX), then the answer is
no. The difference between 2K and 4K is much more subtle, and very dependent
on the quality of the camera lenses. A seminal paper by Brad Hunt of Kodak in
the March 1991 SMPTE Journal describes the basis for Kodak’s development of
a 4K High Resolution Electronic Intermediate System in 1992 (which became
Cineon), using a system MTF analysis to illustrate the effect of sampling
resolution on the resulting images.1 This is reproduced in Figure 2.
Figure 2. Modulation Transfer Function (MTF) as a function of sampling structure for the Cineon digital file system.
Figures 2, 3 and 4. Slope, Offset and Power transfer functions.
Slope
Slope (see Figure 2) changes the slope of the transfer function without shifting
the black level established by Offset (see next section). The input value, slope,
ranges from 0.0 (constant output at Offset) to less than infinity (although, in
practice, systems probably limit at a substantially lower value). The nominal
slope value is 1.0.
Offset
Offset (see Figure 3) raises or lowers overall value of a component. It shifts the transfer function up or down while holding the slope constant. The input value, offset, can in theory range from -∞ to +∞, although the range -1.0 to +1.0 will fit most traditional use. The nominal offset value is 0.0.
Power
Power (see Figure 4) is the only nonlinear function. It changes the overall
curve of the transfer function. The input value, power, ranges from greater than
0.0 to less than infinity. The nominal power value is 1.0.
Saturation
Saturation provides a weighted average of the normal color (saturation 1.0)
and all gray (fully desaturated, saturation 0.0) images. The saturation operation
modifies all color components. Color components are weighted by the values
used in most Rec. 709 implementations of saturation. Saturation values > 1.0 are
supported. Values > 4 or so will probably only be used for special purposes.
Here sat is the user-input saturation parameter; inR, inG and inB are the input red, green and blue color component values; outR, outG and outB are the corresponding output values; and gray is the fully desaturated gray value, based on the color component weightings:
gray = 0.2126 × inR + 0.7152 × inG + 0.0722 × inB
outR = gray + sat × (inR - gray)
with outG and outB computed in the same way from inG and inB.
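Taken together, the three SOP functions and Saturation define a compact per-pixel transform: each channel passes through Slope, Offset and Power in that order, and Saturation is then applied across the three results. The sketch below simply restates the math described above; it is illustrative only, and the ASC CDL release documents (see For Implementors, below) contain the normative definition and example code. Values are assumed to be normalized floating-point RGB.

```python
REC709_WEIGHTS = (0.2126, 0.7152, 0.0722)   # gray weightings cited above

def asc_cdl(rgb, slope, offset, power, sat):
    """Apply Slope, Offset and Power per channel, then Saturation.
    Illustrative restatement of the functions described in the text."""
    sop = []
    for v, s, o, p in zip(rgb, slope, offset, power):
        v = v * s + o          # Slope, then Offset
        v = max(v, 0.0)        # clamp before the (nonlinear) Power
        sop.append(v ** p)     # Power
    gray = sum(w * v for w, v in zip(REC709_WEIGHTS, sop))
    return tuple(gray + sat * (v - gray) for v in sop)

# Nominal values (slope 1.0, offset 0.0, power 1.0, saturation 1.0)
# leave the pixel untouched:
print(asc_cdl((0.18, 0.40, 0.70),
              slope=(1.0, 1.0, 1.0), offset=(0.0, 0.0, 0.0),
              power=(1.0, 1.0, 1.0), sat=1.0))
```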
Offset
For linear encodings, Offset controls the overall “base fog” of the image. The
values of the entire image are moved up or down together, affecting both
brightness and contrast. This is not traditionally a common operation for linear
data.
Power
For linear encodings, Power controls the contrast of the image.
Saturation
For all encodings, including linear, Saturation controls the saturation—
intensity of the color of the image.
The old telecine Lift function—raising or lowering the darks while holding
the highlights constant—can be achieved via a combination of Offset and Slope.
Similarly, the telecine Gain function can also be achieved via a combination of
Offset and Slope.
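As a concrete illustration of that combination, consider a Lift control defined to raise the blacks while pinning white at 1.0 (definitions of Lift vary between telecine controllers, so treat the parameter here as hypothetical):

```python
def lift_as_slope_offset(lift):
    """Express a telecine-style Lift (raise blacks, hold white at 1.0)
    as ASC CDL Slope and Offset. 'lift' is a hypothetical parameter;
    real controllers differ in scaling and sign conventions."""
    slope = 1.0 - lift    # out = in * (1 - lift) + lift
    offset = lift         # so out(0.0) = lift and out(1.0) = 1.0
    return slope, offset
```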
Log
ASC CDL operations will have similar effects on images in the common
encodings that are generally log. Those encodings include printing density (e.g.
Cineon, DPX), commonly seen from film scanners and created or imitated by
other sources; and the various log modes output by various digital cameras to
present a more filmlike response with a wider dynamic range (at least until
cameras put out the high dynamic range Academy ACES “scene-referred linear”
floating point format). Some “log” encodings have special handling near the toe
(blacks/crush) or shoulder (whites/roll-off). In those cases, the interaction of
ASC CDL operations and the special regions will have to be evaluated on a case-
by-case basis.
Figure 5c, d. Example log images, with the ASC CDL applied before the film print emulation.
In digital intermediate, log images will usually have a film print emulation
applied as an output display transform in order to see the color corrected images
as they will be theatrically projected. For this workflow, the ASC CDL is applied
before the film print emulation. (This procedure was applied to the example log
images shown here.)
Slope
For log encodings, Slope controls the contrast of the image.
Offset
For log encodings, Offset controls the brightness of the image while
maintaining contrast—like adjusting the f- or T-stops. This is essentially the
same as Printer Lights but with different values/units.
Power
For log encodings, Power controls the level of detail in shadows vs.
highlights. This is not traditionally a common operation for log data.
Saturation
For all encodings, including linear, Saturation controls the saturation—
intensity of the color of the image.
ASC CDL INTERCHANGE FORMATS
The ASC CDL allows basic color corrections to be communicated through the
stages of production and postproduction and to be interchanged between
equipment and software from different manufacturers at different facilities. The
underlying color correction algorithms are described above.
When ASC CDL color correction metadata is transferred from dailies to
editorial and from editorial to postproduction, provided that data representation,
color space, and viewing parameters are handled consistently, the initial “look”
set for dailies (perhaps from an on-set color correction) can be used as an
automatic starting point or first pass for the final color correction session. ASC
CDL metadata is transferred via extensions to existing, commonly used file
formats currently employed throughout the industry: ALE, FLEx, and CMX
EDL files. There are also two ASC CDL-specific XML file types that can be
used to contain and transfer individual color corrections or (usually project-
specific) libraries of color corrections.
ALE and FLEx files are used to transfer information available at the time of
dailies creation to the editorial database. New fields have been added to these
files to accommodate ASC CDL color correction metadata for each shot.
CMX EDL files are output from editorial and used primarily to “conform” the
individual shots into the final edit. As there is a convention to include shot
specific metadata as comment fields after the associated “event” in the EDL file,
it made sense to use this mechanism to attach ASC CDL parameters to each
“event”. There are two ways of specifying this in an EDL file—either “inline” or
“via XML reference.”
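For illustration, the inline form attaches ASC CDL comment lines directly after each event; the XML-reference form instead points the event at an external file carrying the correction. The sketch below formats the inline comment lines. The keywords follow the ASC CDL release documents, but the numeric values are arbitrary examples, and the release should be consulted for the exact syntax.

```python
def asc_cdl_edl_comments(slope, offset, power, sat):
    """Format the inline ASC CDL comment lines that follow a CMX EDL
    event. Keywords per the ASC CDL release; values here are examples."""
    sop = "".join("(" + " ".join(f"{v:.4f}" for v in triple) + ")"
                  for triple in (slope, offset, power))
    return [f"* ASC_SOP {sop}", f"* ASC_SAT {sat:.4f}"]

for line in asc_cdl_edl_comments((0.9520, 1.0255, 1.0088),
                                 (0.0021, -0.0015, 0.0018),
                                 (1.0, 1.0, 1.0), 0.95):
    print(line)
# * ASC_SOP (0.9520 1.0255 1.0088)(0.0021 -0.0015 0.0018)(1.0000 1.0000 1.0000)
# * ASC_SAT 0.9500
```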
Many vendors are currently using XML to support their internal data
structures. The ASC CDL includes a hook so that CDL parameters can reside in
simple XML text files that can be referenced by the CMX EDL format or other
vendor-specific formats. The two types of XML files are:
1. Color Decision List (CDL): files that contain a set of color decisions (a color
correction with a reference to an image) and that may also include other
project metadata.
2. Color Correction Collection (CCC): files which solely contain one or more
color corrections.
Each and every color correction defined in these files has a unique ColorCorrection id. Any specific color correction defined in these XML files can be referenced by that unique id.
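As a sketch of what such a file contains, the fragment below builds a minimal Color Correction Collection with Python's standard xml library. The element names (ColorCorrection, SOPNode, SatNode and their children) follow the ASC CDL release documents; the id and the numeric values are arbitrary examples, and the namespace string should be taken from the current release.

```python
import xml.etree.ElementTree as ET

def make_ccc(corr_id, slope, offset, power, sat):
    """Build a one-correction ColorCorrectionCollection. Element names
    per the ASC CDL release documents; values are arbitrary examples."""
    ccc = ET.Element("ColorCorrectionCollection",
                     xmlns="urn:ASC:CDL:v1.01")
    cc = ET.SubElement(ccc, "ColorCorrection", id=corr_id)
    sop = ET.SubElement(cc, "SOPNode")
    ET.SubElement(sop, "Slope").text = " ".join(f"{v:g}" for v in slope)
    ET.SubElement(sop, "Offset").text = " ".join(f"{v:g}" for v in offset)
    ET.SubElement(sop, "Power").text = " ".join(f"{v:g}" for v in power)
    sat_node = ET.SubElement(cc, "SatNode")
    ET.SubElement(sat_node, "Saturation").text = f"{sat:g}"
    return ET.tostring(ccc, encoding="unicode")

print(make_ccc("cc0001", (0.952, 1.026, 1.009),
               (0.002, -0.002, 0.002), (1.0, 1.0, 1.0), 0.95))
```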
FOR IMPLEMENTORS
The ASC CDL release documents are intended for implementors and vendors. The release includes details on the interchange formats as well as example code and test images. For instructions on obtaining the current release, send an e-mail to [email protected].
• Preserves the full range of highlights, shadows and colors captured on-set for use throughout postproduction and mastering
• Preserves the ability to use traditional photometric tools for exposure control rather than having to compensate for custom or proprietary viewing transforms in conjunction with a video monitor
• Standardized file formats for colorimetric and densitometric data based on the popular OpenEXR and DPX data containers
Because ACES encodes scene colors, ACES values must be adjusted for the
target display environment and device characteristics to faithfully represent the
recorded images. These adjustments are performed by the Reference Rendering
Transform (RRT) and a display device-specific Output Device Transform (ODT)
that are, in practice, combined into a single transform. Figure 2 also shows the
combined RRT and ODT along with the SMPTE Digital Cinema Reference
projector, a common configuration for mastering theatrical motion pictures.
Curtis Clark, ASC currently chairs the ASC’s Technology Committee and has
done so since it was revitalized in 2002.
Andy Maltz serves as the director of the Academy of Motion Picture Arts and Sciences' Science and Technology Council.
1. The terms “log” and “linear” as currently used in motion picture production are loosely defined. “Log”
can refer to some form of printing density values or log-encoded RGB values, and “linear” can refer to
some form of video code values or linear-encoded RGB values. While the use of these terms is deprecated
in ACES-based workflows, it should be noted that log-encoded forms of ACES data are being developed
for use in certain color correction operations and for realtime transmission of ACES data over HD-SDI.
2. RDT (Reference Device Transform) is the ODT for SMPTE Reference Projector Minimum Color Gamut (Nominal) (see SMPTE RP 431-2-2007 Reference Projector and Environment, Table A.1).
The Cinematographer and the Laboratory
For example:
First Series          Second Series
Normal                Normal
1 Stop Under          1 Stop Over
1 1⁄3 Stops Under     2 Stops Over
1 2⁄3 Stops Under     3 Stops Over
2 Stops Under         3 1⁄3 Stops Over
2 1⁄3 Stops Under     3 2⁄3 Stops Over
2 2⁄3 Stops Under     4 Stops Over
3 Stops Under         4 1⁄3 Stops Over
3 1⁄3 Stops Under     5 Stops Over
4 Stops Under
5 Stops Under
Preliminaries
A. Make sure that camera body and lens tests are already completed and
approved.
B. Secure a pair of showcards flat against a wall, end-to-end in a horizontal
fashion. The black side of one of the cards should be placed on the left, the
white side of the second card on the right.
C. Recruit a model with a representative fleshtone. The term “representative” is
somewhat ambiguous in its use here, but that is part of what this test is trying
to determine. Avoid extreme complexions or coloring of any kind unless you
anticipate dealing with them during principal photography. Also, make sure
that your model dresses in neutral shades rather than solid blacks or whites or
any heavily saturated colors.
D. Using a lens from your tested, matched production set, compose the shot so
that the two showcards completely fill the frame. Place your model directly
facing the lens, just slightly in front of the center seam where the two
showcards meet. In 1.85 format, a 40mm lens works well at a distance of about eight feet and renders a pleasing medium close-up of the model.
E. Create a series of individual flash cards that the model will hold within the
frame to clearly indicate the exposure information relevant to each take:
NORMAL   +1⁄2   +1   +1 1⁄2   +2
NORMAL   -1⁄2   -1   -1 1⁄2   -2
Lighting
A 2K Fresnel is an ideal unit to use with this test. Place it at an angle of about 20 degrees off camera right, and make sure the light is evenly spread at full flood across both the model and the two showcards—with no hot spots or dropoff of any kind. From this position, the lamp serves a dual purpose by not only
any kind. From this position, the lamp serves a dual purpose by not only
properly illuminating the model but by throwing the model’s shadow onto the
black-sided showcard that covers the left half of frame. The deep, rich, velvety
darkness this provides will serve as an important point of reference when
judging the projected print.
Do not use any diffusion on the lamp and do not add any light to the fill side.
ASA/Exposure Index
Since film speed is a relative concept, the best starting point is to rate the
negative at the manufacturer’s suggested value. Besides providing the
information necessary to choose a single printer light for the run of the show, this
test will also allow the setting of an effective ASA/EI rating for the manner in
which the film is to be exposed.
T-Stop
In the interest of contrast uniformity and the elimination of as many variables
as possible, lock the iris ring at a predetermined T-stop and leave it alone for the
length of the test. It should rest at or close to the primary setting intended for use across much of the shoot. Measured exposure shifts will be
carried out through adjustments in lighting intensity and a combination of
neutral-density filters and shutter-angle changes. (For the purpose of this article,
the working stop for the test will be T2.8.)
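The exposure arithmetic behind those adjustments is worth keeping at hand: each 0.3 of neutral density removes one full stop, and exposure is directly proportional to shutter angle. A minimal sketch, assuming a 180-degree base shutter (the function name is illustrative):

```python
ND_PER_STOP = 0.3   # one stop of exposure equals 0.3 neutral density

def shutter_for_fraction(stop_fraction, base_angle=180.0):
    """Shutter angle that trims exposure by a fraction of a stop;
    exposure is proportional to shutter angle."""
    return base_angle / 2 ** stop_fraction

# Whole stops come from ND (0.3 per stop); fractions can come from the
# shutter alone:
print(round(shutter_for_fraction(1 / 3), 1))   # about 142.9 degrees
print(round(shutter_for_fraction(0.5), 1))     # about 127.3 degrees
```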
Filters
If plans for principal photography include the use of filters in front of or
behind the lens, slip the appropriate grade into the matte box or filter slot before
you begin the test.
Laboratory Instructions
The camera report should prominently display the following orders:
First pass: print on best light for gray card/normal exposure only.
Second pass: correct each take back to normal in 1⁄2 stop (4 points each)*
increments.
Note well:
• Normal exposures should all print at the same light in all cases.
• Beware of ambient light or anything else that might compromise the test's integrity.
• Be meticulous with meter readings. If you choose an iris setting of T2.8, your normal exposure should read precisely T2.8 at the model's face. Measuring the increase in light level needed to support the overexposure parts of the test should be handled with equal care.
• Do a separate test for each emulsion you plan to use and each lighting condition you plan on encountering.
• Be sure the model clearly displays the placards indicating the proper exposure for the take being photographed.
• Don't rush.
The Test
First, fill the frame with a gray card. Light it to T2.8 and expose 20 feet at the
same value.
Next, recompose to fill the frame with the black and white showcards,
featuring the model at the center seam.
Following the notations in each column, expose 30 feet for each step as noted:
(Note that after the first normal exposure the light level increases from T2.8 to T5.6. This is done to facilitate the overexposure takes. The standard iris setting here is T2.8, so before starting the test, simply light the model to T5.6 and then use two double scrims (each cutting roughly one stop) on the 2K Fresnel to knock the intensity back down to T2.8 when needed.)
If overexposure is to be carried as far as +3 stops, the basic light level must be
increased to T8 to accommodate that portion of the test. Proportional changes
should then be made to the scrims and shutter angle/neutral density filter
combinations.
The Results
When viewing the projected film, refer to the lab’s printer-light notation sheet
that corresponds to the test exposures.
You should speak to your lab contact as to what is considered a “normal”
printing light for the lab you are using; however, we will assume for this article
that a “normal” printer light would read 25-25-25. Roll 1 will now obviously
play all the way through at light 25-25-25. Any exposure changes noted on
screen will thus be a direct result of what was done at the lens. This pass is
especially helpful in gauging color drift as it relates to exposure. It is also a good
indicator of the emulsion’s ability to hold detail at the noted extremes.
Roll 2 is merely a second printing pass of the same negative but with the
identical series of exposures corrected back to normal in measured increments
by the Hazeltine timer. Based on the concept of 25 across being normal, refer to
the following boxed chart for the progression of printer lights (assuming a
laboratory printing scale of 8 points = 1 stop).
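The progression that boxed chart encodes can also be computed directly. A minimal sketch, assuming the 25-25-25 normal and the 8-points-per-stop scale stated above (real labs differ in scale and in how far over scale they will print):

```python
POINTS_PER_STOP = 8   # laboratory printing scale assumed in this article
NORMAL_LIGHT = 25     # the "normal" printer light assumed above

def correction_light(stops_over):
    """Printer light that times a mis-exposed take back to normal. An
    overexposed (denser) negative needs more printer light, so lights
    rise with overexposure; the value applies across R, G and B."""
    return NORMAL_LIGHT + round(stops_over * POINTS_PER_STOP)

for stops in (-2, -1, -0.5, 0, 0.5, 1, 2):
    print(f"{stops:+.1f} stops -> printed at {correction_light(stops)} across")
# e.g. 1/2 stop over prints back at 29-29-29; 1 stop under at 17-17-17
```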
In this instance, the dailies timer has “helped out” by correcting “mistakes” in
exposure. The second pass thus provides an idea of how far the emulsion can be
stretched before it falls apart. Special attention should be paid to the white and
black showcards that make up the background behind the model. Grain, color
shift and variation in detail will be most readily apparent in these areas. This
pass can also provide information about contrast. If necessary, it may be
requested that the laboratory print “over scale” in order to achieve correction on
an overexposure.
Conclusion
Now that the data needed to decide all critical concerns has been revealed,
decisions that will directly affect the film’s look can be made in an informed
manner. Subjective judgement once again comes into play, but the difference is
that the cinematographer is the one doing the judging.
Usually, the model’s fleshtone will need some tweaking regardless of which
test exposure is most pleasing. Let’s say that Eastman 5213 was used at its
recommended ASA/EI rating of 200. While viewing results of the corrected Roll
2 on screen, it is decided that the grain structure and shadow detail of the take
indicating 1⁄2 stop overexposure (printed back to normal) looks best. By
referencing the lab’s printer light notation sheet, this would render a printer light
of 29-29-29 to start and therefore an effective ASA/EI rating of 160. A desire for
a different sort of fleshtone might lead the cinematographer to order a small
adjustment in the addition of perhaps 1 point red and 2 points yellow. The
resulting printer light of 28-29-31 would then be the one to use during principal
photography. To verify the effect, it would be advisable to shoot an additional
test under identical conditions with the same model, while printing it at these
new numbers.
Hereafter, by having the assistant cameraperson stamp 28-29-31 in the camera
report’s printing instructions box, the cinematographer can be certain of two
things. Besides meeting a specific standard that depends solely on the effort put
into each shot, such items as silhouette effects will indeed come back from the
lab as silhouettes—each and every time. Instead of communicating with such
agonizing vagaries as “print for highlights,” this simple set of numbers conveys
to the dailies timer exactly what is needed in a way that will stand up in court.
This isn’t to say, however, that the printer lights are by any means sacrosanct.
Over the long haul of shooting a feature, modifications are inevitable.
Ultimately, however, what is most important is that the cinematographer is
always the one who chooses how and when to do the modifying.
Richard Crudo, ASC currently serves on the Academy of Motion Picture Arts
and Sciences Board of Governors, and is a former ASC President and current
ASC Vice President. He has photographed the feature films Federal Hill,
American Buffalo, American Pie and Down to Earth.
Adjusting Printer Lights to Match a Sample Clip
Here's a low-tech but usefully accurate method for modifying printer lights “in the field” to match the color of a film clip, using CC Wratten filters.
You’ll need a light box with a color temperature around 5400°K, a cardboard
mask with two closely-spaced, side-by-side frame-size holes cut into it and a set
of color correction filters of .05, .10, .20, .30, .40, and .50 values in Yellow,
Cyan, Magenta, and Neutral Density. (Kodak Wratten CC filters are expensive and fragile to use as viewing filters; however, they will last a long time if handled carefully. CP filters retired from darkroom or still-photo use are just fine. You
may be able to find less-expensive plastic substitutes.) You’ll also need a loupe
with a field big enough to see at least part of both frames.
Put the mask on the light box, put the sample print film clip (the one to match)
over the left-hand mask aperture. Put the target print (to be reprinted) over the
right-hand aperture. (The mask keeps your eyes from being dazzled by white
light leaking around the film frame.)
It’s a two-step process.
MATCH OVERALL BRIGHTNESS
Looking at the midtones of the two prints, add ND filters under one clip or the
other until the brightness is an approximate match. It’s much easier if there’s a
similar subject in both frames!
COLOR ADJUSTMENT
Estimate the color shift needed to bring the color of the target clip around to
match the sample. For example, if the sample is too blue, add yellow filters. If
it’s too cyan, add yellow and magenta. If the target clip now looks too dark with
the color filters added, remove some ND from the target (or add it to the
sample). With practice, you’ll be able to get a reasonable match. From time to
time, look away for a moment at a neutral color so your eyes will not get tired.
Now add up the color filters (ignore the ND’s for now). Let’s say you added a
total CC .15 magenta and .30 yellow to the target print. Consult the table below
for the correction needed in printer points.
ADJUSTING THE PRINTER LIGHTS
Make the first correction of the printing lights, adjusting color only. The
printer lights are usually listed in Cyan, Magenta, Yellow order on the light card,
or the corresponding printer filters Red, Green and Blue, respectively. But don’t
think about Red, Green and Blue when you’re dealing with color print! The
layers in color print produce Cyan, Magenta and Yellow dye, so if you think in
those terms, you can’t get confused.
Let’s say the hypothetical target print, which needs 2 printer points of
Magenta and 4 points of Yellow to bring it around, originally printed on C 25, M
22 and Y 34.
Adding 2 points of Magenta and 4 points of yellow yields an adjusted printer
light of C 25, M 24 and Y 38.
Now let’s get back to the ND’s. Assume you wound up with .30 ND over the
target print, which was too light as well as needing the color correction. Per the
table, you should add 4 printer points across. So the new, final printing light is C
29, M 28, and Y 42.
If you had to lay the ND filters over the sample print instead, that meant that
the target print was too dark. So subtract the neutral printer point adjustment
instead of adding. If it needs to be .30 darker, subtract 4 printer points across,
and so forth.
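The whole procedure reduces to a few lines of arithmetic. A sketch using only the conversions quoted in this example (CC .15 magenta as 2 points, CC .30 yellow as 4 points, .30 ND as 4 points across); consult the full table for other values:

```python
def adjust_printer_lights(c, m, y, c_points=0, m_points=0, y_points=0,
                          nd_points=0):
    """Apply per-layer color corrections (in printer points) and a
    neutral shift (positive if the target print was too light, negative
    if too dark) to a starting Cyan-Magenta-Yellow printer light."""
    return (c + c_points + nd_points,
            m + m_points + nd_points,
            y + y_points + nd_points)

# The worked example: starting light C 25, M 22, Y 34; add 2 points of
# Magenta and 4 of Yellow, then 4 points across for the .30 ND that sat
# over the too-light target print.
print(adjust_printer_lights(25, 22, 34, m_points=2, y_points=4,
                            nd_points=4))   # -> (29, 28, 42)
```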
This method requires mid-tones in both clips. If the target clip is mostly
highlights (on the “toe” of the print curve), it will take more correction to
“move” them to where you want. Obviously a burned-out specular highlight on
the print will not change at all from a modest printer light correction.
A very dark scene is also hard to correct. Adjust the lightest detail you can
find, preferably a skin tone.
With experience, the filter adjustments will become second nature and the arithmetic automatic; you'll be calling lights like a film lab timer.
Cinemagic of the Optical Printer
Lin Dunn, the author of this article, is one of the motion-picture industry’s
most accomplished pioneers. Involved with the design and origins of the first
optical printers, Dunn remained vibrantly active in the industry throughout his
life and applauded the introduction of CGI techniques that, in many ways,
obviated many of the needs for his optical printer. Dunn is responsible for the
effects you can and can’t see in Citizen Kane, and literally hundreds of film
projects after that. He was always on the cutting edge, and in his nineties founded one of the first companies to develop digital cinema, Real Image
Digital. Dunn’s passing in May 1998 was a loss to all who were fortunate
enough to be on the receiving end of his generous knowledge and skill; his was
also the loss of one of our remaining links to Hollywood's earliest days. Winner of the Gordon E. Sawyer Award from the Academy of Motion Picture Arts and Sciences, and a past President of the ASC, Dunn penned an article that
is as relevant today as when he first wrote it. — Editor
The earliest optical printers were custom-built by the major studios and film
laboratories, and were usually designed and made in their own shops to fit
their particular requirements. Modern standardized optical printing equipment,
capable of creating the innumerable effects heretofore possible only in the major
studios, became available to the entire motion picture industry in 1943 with the
introduction of the Acme-Dunn Optical Printer, designed and built for the United
States Armed Forces Photographic Units. Later the Oxberry, Producers Service,
Research Products and other optical printers appeared on the market.
Commercial availability of this type of equipment greatly stimulated and
widened the scope of the special-effects field. Even the smallest film producers
could now make motion pictures with special effects limited only by their
imagination and budgets, utilizing the services of growing numbers of
independent special-effects laboratories which could now operate competitively
using equipment available to all.
Developments over the years of more sophisticated equipment, new
duplicating films, special-purpose lenses and improved film-processing
techniques, as well as skilled technicians, have increased the use of the optical
printer to a point where its great creative and economic value is common
knowledge in the motion-picture industry. In more recent years, the adaptation of
computer technology to the optical-effects printer has basically simplified the
control and accuracy of some of its important functions, thus making it much
easier to produce certain complex visual effects at lower cost, as well as to
greatly expand its creative scope. This has made it possible to program, record
and repeat the movement of certain of its devices with such a degree of accuracy
that area-blocking functions can now produce traveling matte composite scenes
heretofore highly impractical, if not impossible. One can truly say that the
creative capability of the modern visual-effects optical printer is only limited by
the creative talent and technical skills of the operator. In recent years, such major
film productions as Star Wars, The Black Hole, The Empire Strikes Back and
Cocoon have all utilized the full capabilities of the modern optical printer to
create a whole new world of imaginative creativity through their extensive use of
very sophisticated motion-picture visual effects. The following list of some of
the work that is done on the modern optical printer will illustrate its vast scope
and tremendous importance to modern filmmaking.
TRANSITIONAL EFFECTS
Employed to create a definite change in time or location between scenes. The
fade, lap dissolve, wipe-off, push-off, ripple dissolve, out-of-focus or diffusion
dissolve, flip-over, page turn, zoom dissolve, spin-in and out, and an unlimited
variety of film matte wipe effects are all typical examples of the many optical
transitional effects possible.
CHANGE OF SIZE OR POSITION
May be used to eliminate unwanted areas, obtain closer angles for extra
editing cuts, reposition action for multiple-exposure framing, including
montages and backgrounds for titles.
FRAME SEQUENCE MODIFICATION
Screen action may be sped up or slowed down in order to: convert old 16 fps
silent films to standard 24 fps sound speed; change speed of action and length of
certain scenes or sections of scenes; provide spot frame modification to give
realism to specific action in fights, falls, chases, etc.; hold a specific frame for
freeze effects and for title backgrounds; add footage for comedy effects; reverse
direction of printing to lengthen action and for special effects use; extend scenes
through multiple-frame printing for action analysis in instrumentation, training
and educational films.
OPTICAL ZOOM
Optical zoom is used to change frame-area coverage and image size during
forward and reverse zooming action in order to: produce a dramatic or impact
effect (according to speed of the move); counteract or add to the speed and
motion of camera zooms or dolly shots; reframe by enlargement and/or add
footage to either end of camera zooms or dolly shots by extending the range of
moves; momentarily eliminate unwanted areas or objects by zooming forward
and back at specific footage points (such as when a microphone or lamp is
accidentally framed in during part of a scene); add optical zoom to static scene to
match camera zoom or dolly in a superimposure. The out-of-focus zoom also is
effective to depict delirium, blindness, retrospect, transition, etc.
SUPERIMPOSURE
Superimposure is the capability used to print an image from one or more films
overlaid on one film. This is commonly done in positioning title lettering over
backgrounds. Also used for montages, visionary effects, bas relief; adding snow,
rain, fog, fire, clouds, lightning flashes, sparks, water reflections and a myriad of
other light effects.
SPLIT SCREEN
Employed for multiple image, montage effects, dual roles played by one actor,
for dangerous animals shown appearing in the same scene with people (as in
Bringing Up Baby, which shows Katharine Hepburn working with a leopard
throughout the picture), where such split screens move with the action. Matte
paintings often utilize this technique when live-action areas require manipulation
within an involved composite scene.
QUALITY MANIPULATION
The quality of a scene, or an area within a scene, may be altered in order to
create an entirely new scene or special effect or to match it in with other scenes.
There are innumerable ways to accomplish this, such as adding or reducing
diffusion, filtering, matting and dodging areas, and altering contrast. Often
library stock material must be modified to fill certain needs, such as creating
night scenes from day; reproducing black-and-white on color film through
filtering, printed masks or appropriately coloring certain areas through localized
filtering; and the combining of certain areas of two or more scenes to obtain a
new scene, such as the water from one scene and the terrain or clouded sky of
another.
ADDING MOTION
Employed to create the effect of spinning or rotating, as in plane and auto
interiors and in certain montage effects; rocking motion for boat action, sudden
jarring or shaking the scene for explosion and earthquake effects; distortion in
motion through special lenses for drunk, delirious and visionary effects.
GENERAL USES OF THE OPTICAL PRINTER
The preceding represent some of the special categories of effects that can be
produced on the optical printer. The following are a few of the more important
general uses employing this useful cinematic tool.
Traveling Mattes
Used to matte a foreground action into a background film made at another
time. The various matte systems in use today require the optical printer in order
to properly manipulate the separate films to obtain a realistic-quality matching
balance between them when combined into a composite. Use of this process has
greatly increased as modern techniques produce improved results at reduced
costs. Motion control, referred to earlier, has greatly widened the scope of this
visual-effects category.
Anamorphic Conversions
The standard optical printer equipped with a specially designed “squeeze” or
“unsqueeze” lens can be used to produce anamorphic prints from “flat” images,
or the function reversed. The possibility of the “flat” or spherical film being
converted for anamorphic projection without serious loss of quality has greatly
widened this field of theatrical exhibition. The manipulations available on the
optical printer also make it possible to scan and reposition any scenes that
require reframing when converted to or from widescreen proportion.
DOCTORING, MODIFYING AND SALVAGING
Some of the important uses of the optical printer are not recognized as special
effects in the finished film, and often are not apparent as such even to skilled
motion-picture technicians. One of these applications is the field of “doctoring”
by modifying scenes that, for a variety of reasons, may not be acceptable for use.
This includes salvaging scenes that are completely unusable due to some
mechanical failure or human error during photography; and also the modification
of stock film material through the various methods noted to fit specific
requirements. Many expensive retakes have been avoided by the ingenious
application of such optical-printing reclamation techniques. The liquid, or
immersion, film gate produces dramatic results in the removal of scratches.
Citizen Kane is an excellent example of scene modifications created on the
optical printer during the postproduction period. New ideas were applied to
existing production scenes for which new supplementary scenes were
photographed and integrated to enhance and create various new concepts.
In It’s A Mad, Mad, Mad, Mad World, an important scene was photographed in
which a truck was supposed to back into a shack and knock it over. The
breakaway shack was rigged to collapse when wires were pulled on cue. Signals
became crossed, and the shack was pulled down well before the truck touched it.
A very costly retake was indicated, so the optical printer was called to the rescue.
The task of correcting the error through a split screen seemed relatively simple
until it was discovered that the camera panned with the falling shack. It then
became necessary to plot and move the split matching point frame-by-frame on
the optical printer to follow the pan. Through this traveling split-screen
technique, the progress of the shack’s falling action was delayed until the truck
had reached the point of impact. Perhaps the entire cost of the optical printer was
saved by this salvaging job alone. Such clever techniques have been used many
times to bring explosions close to people working in a scene, such as in One
Minute to Zero, where a line of so-called refugees were “blown to bits” by
artillery shelling. Split screens in motion and trick cuts, with superimposed
smoke and flame, did the job in a most effective manner.
NEW SYSTEMS
The optical printer is being used to develop new horizons in the creation of
special camera moves within an oversized aperture. This is particularly effective
in the creation of camera movement in a composite scene, such as one involving
a matte painting, thereby giving a greater illusion of reality. VistaVision and
various 65mm negative formats, including 15-perforation IMAX and 8-
perforation Dynavision as well as standard 5-perforation frames, lend themselves
to this technique.
Copying onto 4-perforation 35mm makes possible spectacular pans, zooms,
dolly shots, etc., without sacrificing screen quality, and with full control over
such movements, all of which are created on the optical printer in the
internegative stage and made during the postproduction period. Use of this
technique makes it possible to avoid time-consuming and complicated setups
during production, with the added advantage of flexibility in later change of
ideas.
Probably the most exciting new optical-printing development has been in the
field of electronics. The adaptation of video image transfer through sophisticated
high-resolution scanning systems, in conjunction with the new developments in
cathode-ray tubes, lenses, film-moving mechanisms, special-purpose film stocks
and the latest research in electronic image compositing, have opened up exciting
new vistas in special visual effects. The modification of filmed color motion-
picture images through computerized electronic transfer back to film is making it
possible to create photographic effects on film or tape faster, more economically,
and with a scope of creativity heretofore not possible. The ability to easily and
quickly transfer areas or moving objects from one film to another through their
instantaneous electronic isolation and self-matting will be of tremendous
economic benefit in this area of film production, as well as in stimulating
creativity in the wider use of special effects.
Motion-Control Cinematography
Motion-Control Technique
When working on Star Wars, we started with an empty building and had to
amass, modify and build our motion-control equipment before we could produce
any images. We had created visual “violins” and had to learn to play them.
Fortunately, the picture hit and a large audience showed up for our motion-
control recitals. Since then, many innovations have come about in the equipment, and many excellent motion-control cinematographers have appeared, along with many specialty techniques. In the studio there are two main techniques for
programming motion files: one is to use start and end positions for each axis of
motion and have the computer generate the moves; the other allows the
cameraperson to generate the move by joystick. It is my opinion that the
computer-generated method is superior for graphics and animation purposes, and
the human interface is best for most miniature and model photography. If shots
are created using a computer, the moves will have mathematically perfect
curves, slow-ins, slow-outs, etc., and will have no heartbeat or verve, especially
in action sequences, therefore becoming subliminally predictable and less
interesting to the audience. The human operator is not interested in mathematical perfection; rather, they tailor the camera move moment by moment to what is
interesting in their viewfinder. This human sense of curiosity is present in the
work of a talented operator, and it transfers to the audience.
Richard Edlund, ASC currently serves on the Academy of Motion Picture Arts
and Sciences Board of Governors and is a chairman of the Academy’s Scientific
and Technical Awards Committee as well as the Academy’s Visual Effects
Branch. He has been awarded Oscars® for visual effects in Star Wars, The
Empire Strikes Back, Raiders of the Lost Ark and Return of the Jedi.
Greenscreen and Bluescreen Photography
Rotoscoping
“Rotoscoping” or “roto” is another method for making alpha-channel
composites. The alpha-channel masks are created by hand-tracing the outline of
the foreground action frame-by-frame. This technique of tracing live action
(named by its inventor, the pioneer cartoonist Max Fleischer) is now more
refined and widely used than ever, because 3-D conversion depends on it. It is
extremely labor intensive but very flexible, since no special backing or lighting
is required.
Present-day computer-assisted rotoscoping can produce excellent composites.
Nearly all the composites of foreground actors in Flags of Our Fathers (2006)
were created by rotoscoping. Good planning was the key to avoiding difficult
foreground subjects. It helped a lot that many of the actors wore helmets,
simplifying their silhouettes.
Some foregrounds are poor candidates for rotoscoping. For example, if the foreground has a bright, burned-out sky behind it, the sky will flare into and even destroy the edge of the foreground. Flare can be very difficult to deal with (unless, of course, the final background is also a bright, burned-out sky). Fine
detail and blurred motion are also difficult or impossible to roto, often requiring
paintwork replacement. The skill of the artists is the single most important
factor.
Rough roto masks (“garbage mattes”) are often used with green screen compositing,
to clean up contaminated areas of the backing, remove unwanted supports, and
so forth.
SCREEN CHOICES: FABRIC, PAINT AND PLASTIC
MATERIALS
The best materials currently available are the result of years of research to create optimal combinations of lamps, dyes and pigments.
Fabrics
Even an indifferent backing can give good results if it is lit evenly with narrow-band tubes or LEDs to the proper level (within plus or minus 1⁄3 f-stop). Spill from set lighting remains a concern.
Composite Components Co.3 offers a fabric that is highly efficient, very light,
stretchy and easy to hang. It must be backed by opaque material when there is
light behind it. The green fabric is fluorescent, so it is even more efficient under
UV-rich sources like skylight. CCC also makes a darker material for direct-sun use.
Following Composite Components’ lead, many suppliers now provide
“digital-type” backings of similar colors. While similar in appearance, some of
these materials are substantially less efficient, which can have a great cost
impact when lighting large screens. Dazian Tempo fabric, a fuzzy, stretchy
material, has a low green or blue saturation when lit with white light, so it isn’t
recommended for that application. Dazian’s Lakota Green Matte material is a
better choice for white-light applications like floor covering; it is resistant to
fading and creasing, and can be laundered. Another major supplier, The Rag
Place in North Hollywood, CA, supplies Optic Green Tempo Fabric, which has a
built-in opaque backing. Its reflective value is between that of Digital Green©
and chroma-key green.
Measured reflective values relative to an 18% gray card (N) are: chroma-key Green = N, Optic Green = N + 2⁄3 EV, Digital Green© = N + 1 2⁄3 EV.
When using fabric backings, minimize the seams and avoid folds. Stretch the fabric to minimize wrinkles. All green materials will sun-fade with time. If a day-
lit screen must be kept up for several days it should be protected when not in
use.
Paint
Composite Components’ Digital Green© or Digital Blue© paint is the
preferred choice for large painted backings. As with fabrics, there are other paint
brands with similar names that may not have the same efficiency. Equivalent
paints made specifically for this purpose are also available from Rosco. Paints
intended for video use, such as Ultimatte chroma-key paints, can also be used
with good illuminators (lights). A test of a small swatch is worthwhile with
materials whose performance is unknown.
Plastic Materials
Plastic materials are a good alternative to fabric or paint for floor covering.
Fabric can be hazardous if loose underfoot. Painted floors scuff easily and
quickly show shoe marks and dusty footprints.
ProCyc’s Pro Matte plastic material is a good choice for floors or for entire
limbo sets. The material is a good match to Digital Green© and Digital Blue©
paint and fabric. It is tough, scuff-resistant and washable. It is available in sheets,
preformed coves and vertical corners in several radii. It is relatively costly, but
the cost is at least partly offset by time and materials saved in shooting.
Figure 11a & b. Unevenly lit Blue set, actor in the set with shadows
and reflections
With screen-correction software, which compares the shot with a “clean plate” of the empty set, backing defects such as scuffed floors, set-piece shadows, and color variations in the backing, as well as minor lens vignetting, all disappear. Note that the actor's
shadows reproduce normally, even where they cross over shadows already on the
backing.
There is a significant limitation: if the camera moves during the shot, the
identical camera move must be photographed on the empty set for the length of
the scene. While it is reasonably quick and simple to repeat pan-tilt-focus
camera moves with small, portable motion control equipment, that equipment is
not always available. Fortunately top camera operators have an almost uncanny
skill at repeating previous moves. Skilled match movers can bring a “wild” clean
pass into useful conformance around the actor, and remove discrepancies with
rotoscoping. Some matchmovers prefer the clean footage to be shot at a slower
tempo to improve the chances that more wild frames will closely match the takes
with the actors.
When it’s not possible to shoot a clean plate, Ultimatte AdvantEdge software
can semiautomatically generate synthetic clean frames. The software can detect
the edges of the foreground image, interpolate screen values inward to cover the
foreground, and then create an alpha using that synthetic clean frame. There are
some limitations; it’s always best to shoot a clean plate if possible.
ILLUMINATORS
The best screen illuminators are banks of narrow-band green or blue fluorescent tubes driven by high-frequency flickerless electronic ballasts.4 These
tubes can be filmed at any camera speed. The tube phosphors are formulated to
produce sharply cut wavelengths that will expose only the desired negative layer
while not exposing the other two layers to a harmful degree. These nearly perfect
sources allow the use of the lowest possible matte contrast (gamma) for best
results in reproducing smoke, transparencies, blowing hair, reflections, and so
forth.
Kino Flo four-tube and eight-tube units are the most widely used lamps. They
are available for rent with “Super Green” or “Super Blue” tubes from Kino Flo
in Sun Valley, CA, and lighting suppliers worldwide. The originators of narrow-
band tubes, Composite Components still supplies Digital Green© and Digital
Blue© tubes tailored specifically to film response. Mac Tech and others have
introduced extremely efficient LED sources with similar characteristics to
narrow-band fluorescent globes.
All these lamps have very high output and can be set up quickly. The light
from these tubes is nearly perfectly monochromatic; there is almost no
contamination. Flickerless power supplies run these units. Some high frequency
fluorescent ballasts and all the LED sources can be dimmed, a great convenience
in adjusting backing brightness.
Large lightweight fixtures like Kino Flo make it easy to evenly illuminate
large backings, and the doors built into most units simplify cutting the colored
light off the acting area.
A good scheme for front-lit backings is to place arrays of green fluorescents
above and below the backing at a distance in front equal to approximately 1⁄2 the
backing height. The units may be separated by the length of the tubes, or brought
together as needed to build brightness. The lamps must overlap the outer margins
of the screen. Keep the subjects at least 15 feet from the screen. The goal is to
eliminate direct green light falling on the actor. Figure 14b shows side and top
views of an actor standing on a platform that serves to hide the bottom row of
lights. If the actor’s feet and shadow are to be in the shot, the platform may be
painted green or covered with green cloth or plastic material.
Note that if a platform is not practical, mirror Plexiglas or Mylar on the floor
behind the actor can bridge the gap from the acting area to the screen, extending
the screen downward by reflection.
A backing can be evenly lit entirely from above by placing a second row of
lamps about 30% further away from the screen and below the top row. The
advantage of lighting from above is that the floor is clear of green lamps.
Lighting from above requires careful adjustment to achieve even illumination.
The overhead-only rig requires about 50% more tubes and spills substantial green or blue light onto the foreground in front of the screen. To film 180-
degree pan-around shots on Universal’s The Fast and the Furious (2001), the ace
rigging crew lit a three-sided backing 30 feet high and more than 180 feet long
entirely from above.
The number of tubes required depends on backing efficiency, the film speed
and the desired f-stop. As an example, six four-tube green lamps are sufficient to
light a 20 by 20 foot Composite Components Digital Green© backing to a level
of f4 at ISO 200. Eight four-tube blue lamps yield f4 with a 20 by 20 foot blue
backing from the same maker.
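Those two data points scale in the obvious photometric way: tube count grows in proportion to backing area, doubles for each stop of working aperture (light requirements rise with the square of the f-number), and doubles again each time the exposure index is halved. A rough planning sketch only; backing and fixture efficiencies vary, so test before rigging:

```python
import math

def estimate_lamps(area_sqft, f_stop, iso, base_lamps=6,
                   base_area=400.0, base_fstop=4.0, base_iso=200.0):
    """Scale the quoted data point (six 4-tube green Kino Flos light a
    20 x 20 ft Digital Green backing to f4 at ISO 200) to other setups.
    Purely proportional; treat the result as a starting point only."""
    lamps = (base_lamps
             * (area_sqft / base_area)       # more area, more tubes
             * (f_stop / base_fstop) ** 2    # light rises as f-number^2
             * (base_iso / iso))             # slower stock needs more
    return math.ceil(lamps)

print(estimate_lamps(400, 4.0, 200))   # -> 6, the quoted data point
print(estimate_lamps(400, 5.6, 200))   # one stop deeper: about 12
```

For a blue backing, start from the quoted eight lamps instead (base_lamps=8).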
ALTERNATIVE LIGHT SOURCES
In a pinch, commercial daylight fluorescent tubes or Kino Flo white tubes
wrapped with primary green or primary blue filter sheets can produce good
results. The Rosco CalColor line of primaries works best. Balance to the desired
brightness in the screen color as described below. The downside is great loss of
efficiency; it takes about four filtered daylight tubes to equal the output from one
special-purpose tube.
Figure 14a & b. Side and top views of screen lit with six light banks.
Regular 60Hz ballasts can be used with commercial tubes at the cost of weight
and power efficiency. As with any 60Hz fluorescent lamps, 24 fps filming must
be speed-locked (nonlocked cameras are fortunately rare) to avoid pulsating
brightness changes, and any high-speed work must be at crystal-controlled
multiples of 30 fps. These tubes are somewhat forgiving of off-speed filming
because of the “lag” of the phosphors.
Backings can also be front-lit with Primary Green or Primary Blue-filtered
HMI lamps. The only advantage is that the equipment is usually already “on the
truck” when a shot must be improvised. Getting even illumination over a large
area is time-consuming, and filters must be carefully watched for fading. Heat
Shield filter material is helpful. Because of high levels of the two unwanted
colors, HMI is not an ideal source, but it is better than incandescent.
In an emergency, filtered incandescent lamps can do the job. They are an
inefficient source of green light and much worse for blue (less than 10% the
output of fluorescents) so they are a poor choice for lighting large screens. Watch
for filter fading as above.
A green or blue surface illuminated with white light is the most challenging,
least desirable backing from a compositing standpoint. White light, however, is
required for floor shots and virtual sets when the full figure of the actor and his
shadow must appear to merge in the background scene. Advanced software can
get good results from white-lit backings with the aid of Screen Correction and a
“clean plate” as described above. Difficult subjects may require assistance with
hand paintwork.
EYE PROTECTION
A word about eye protection: Many high-output tubes produce enough
ultraviolet light to be uncomfortable and even damaging to the eyes. Crew
members should not work around lit banks of these fixtures without UV eye
protection. It is good practice to turn the tubes off when they are not in use. The
past practice of using commercial blueprint tubes was dangerous because of their
sunburn-level UV output.
CHOOSING THE BACKING COLOR
The choice of backing color is determined by the wardrobe or subject color. The range of permissible foreground colors is wider when the backing can be lit separately from the actor than when the actor must be photographed in a white-lit green set (a “floor shot”), for example.
A blue backing is satisfactory for most colors except saturated blue. Pastel
blues (blue eyes, faded blue jeans, etc.) reproduce well. The color threshold can
be adjusted to allow some colors containing more blue than green (such as
magenta/purple) into the foreground. If too much blue is allowed back into the
foreground, some of the blue bounce light will return. Therefore, if magenta
wardrobe must be reproduced, it is prudent to take extra care to avoid blue
bounce and flare. Keep the actors away from the backing, and mask off as much
of the backing as possible with neutral flats or curtains. Saturated yellows on the
subject’s edge may produce a dark outline that requires an additional post step to
eliminate. Pastel yellows cause no problems.
A green backing is satisfactory for most colors except saturated green. Pastel
greens are acceptable. Saturated yellow will turn red in the composite unless
green is allowed back into the subject, along with some of the green bounce or
flare from the original photography. The same precautions as above should be
taken to minimize bounce and flare. Pastel yellow is acceptable. Figure 15 shows
a test of green car paint swatches against green screen. The hue and saturation of the “hero” swatch were sufficiently distinct from the screen color to pose no difficulties in matting or reproduction. Bounce light from the screen was
carefully flagged off in the actual shot. Note that none of the colors in the
MacBeth chart are affected except for two saturated green patches, which have
become semi-transparent.
High bounce levels are unavoidable where the actor is surrounded by a green
floor or virtual set: one should not expect to reproduce saturated magenta or
saturated yellow on a green floor without assistance in post.
If the foreground subject contains neither saturated green nor saturated blue,
then either backing color may be used. However, the grain noise of the green
emulsion layer on color negative and the green sensor in a digital camera is
generally much lower than the grain noise of the blue layer. Using a green
backing will therefore result in less noise in shadows and in semi-transparent
subjects. Black smoke in particular reproduces better against a green backing.
Obviously, it is important for the cinematographer and his ally the vfx
supervisor to be aware of wardrobe and props to be used in green screen and
blue screen scenes. Sometimes a difficult color can be slightly changed without losing visual impact, saving much trouble and expense in post. If in doubt, a
test is always worthwhile. Video Ultimatte Preview (see below) can be
invaluable.
Some visual effects experts prefer blue backings for scenes with Caucasian
and Asian actors, finding it somewhat easier to achieve a pleasing flesh tone
without allowing the backing color into the foreground. Those skin tones reflect
mostly red and green and relatively little blue. For dark-skinned actors, either
backing color seems to work equally well.
Back-lit Backings
Backings can be backlit (translucent) or front-lit. Big translucent backings are
almost extinct due to their high cost, limited size and relative fragility.
Translucent Stewart blue backings gave nearly ideal results and required no
foreground stage space for lighting. Due to lack of demand, Stewart has never
made translucent greenscreens. Front-lit backings are more susceptible to spill
light, but with careful flagging they can produce a result every bit as good as
back-lit screens.
Translucent cloth screens can be back-lit effectively but when back-lit, seams
limit the usable size.
Fluorescent fixtures with UV (black light) tubes will cause Digital Green© and Digital Blue© fabrics and paint to fluoresce without affecting adjacent sets. The
fabric can be lit from the front or the back, seams permitting. Actors and crew
should not be exposed to high levels of UV light.
Front-lit Backings
If the actor's feet and/or shadow do not enter the background scene, then a simple vertical green or blue surface is all that is needed. The screen can be
either a painted surface or colored fabric. Any smooth surface that can be
painted, including flats, a canvas backing, and so forth, can be used. Fabrics are
easy to hang, tie to frames, spread over stunt air bags, and so on. Please see the
section on Illuminators in this chapter for spacing and positioning of lamps.
Local Color
Of course, skylight is intensely blue, so fill light supposedly coming from the
sky should be blue relative to the key. Likewise, if actors and buildings in the
background are standing on grass, much green light is reflected upward into their
shadows. If the actor matted into the shot does not have a similar greenish fill, he
will not look like he belongs in the shot. Careful observation is the key. In a
greenscreen shot, the bounce light from grass is low in both brightness and
saturation compared to the screen color, so that color cast can be allowed in the
composite foreground while still suppressing the screen. The same is true of sky
bounce in a bluescreen shot.
A day exterior shot will often shoot in the f5.6 to f11 range or even deeper.
Fortunately, efficient lighting and high ASA ratings on films and sensors permit
matching these deep f-stops on the stage. In a day car shot, for example, holding
focus in depth from the front to the rear of the car contributes to the illusion.
Figure 17 shows a 28′-wide screen lit with 16 four-tube Kino Flo lamps, plus
two HMI “helper” lamps with green filters on the sides. This combination made
it possible to film at f11 with 200 ASA Vision 2 negative. Curtains at left, right,
and top made it easy to mask off portions of the screen outside the frame.
Of course, when it’s possible to film foregrounds like this one in daylight, so
much the better.
Green illumination from the backing can be made negligible by keeping the
actors away from the backing (at least 15 feet; 25 feet is better) and by masking
off all the backing area that is not actually needed behind the actors. Use black
or neutral flags and curtains. (The rest of the frame can be filled in with
window mattes in compositing.) Any remaining color cast is eliminated by the
software.
Screen reflections are best controlled by reducing the backing size and by
tenting the subject with flats or fabric of a color appropriate to the background.
In a common worst case, a wet actor in a black wetsuit, the best one can do is to
shoot the actor as far from the screen as possible, mask the screen off as tightly
as possible, and bring the environmental bounce sources fully around to the
actor’s off-camera side without, of course, blocking the screen. A back cross
light will wipe out any screen reflection on the actor, but it will look false if
it’s not justified by the background lighting.
Big chrome props and costumes present similar challenges. Since they present
the whole crew with a huge headache (every light shows, and sometimes the
camera crew as well) it is usually not too difficult to arrange modifications to
these items. When the visual effects team is brought in early on, problems like
these can be headed off in the design stage.
A common reflection challenge is a Plexiglas aircraft canopy or a compound-
curve spacesuit helmet which, depending on the lighting angle and camera
position, can show every lamp and bounce source. A bounce source for a canopy
shot must be uniform and surround the canopy 180 degrees on the camera side.
Sometimes the best solution is to shoot without the Plexiglas and track a CG
model back in during compositing. An advantage of CG Plexiglas is that it can
reflect the composited background.
Reflections can be disguised with dulling spray, but sometimes they cannot be
eliminated. In the worst case, reflections make “holes” in the matte that must be
filled in digitally in post. Pay particular attention to the faces of perspiring
actors, which can be very reflective. Of course, when the actor must stand in the
middle of a green-painted virtual set, some green contamination is unavoidable;
it will be removed by the compositing software.
Sometimes reflections are desirable: Sheets of mirror Mylar or Plexiglas can
extend a screen by reflection, even below the stage floor. Actors can walk on
mirror Plexiglas to be surrounded by the screen’s reflection. (Of course their
own reflection must be dealt with.)
In a scheme devised by the late Disney effects wizard Art Cruickshank, ASC,
an actor on a raft in a water tank was shot against a Sodium matte backing. The
backing and the actor reflected strongly in the water. This enabled the Disney
optical department to matte the actor and his reflection into ocean backgrounds.
Cruickshank’s method was revived and used effectively in blue screen shots in
Daylight (1996) and more recently in greenscreen shots in Bruce Almighty
(2003) where Jim Carrey and Morgan Freeman seem to be walking on Lake
Erie, while actually standing in a shallow water tank on the back lot.
The spillways at the back of both tanks ensured a seamless transition between
the screen and its reflection in the water.
The human eye quickly compensates for small light changes; it is not a good
absolute measuring device. (It is however superb at comparisons.) It is necessary
to use a spot brightness meter and green filter to check for uniform brightness. A
digital camera with a computer display is also useful for making a quick check
of lighting uniformity in the three color channels.
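Where a computer is at hand, that check can be automated. Below is a minimal
sketch (Python with NumPy; the function name and grid size are illustrative,
not from any vendor tool) that grades a frame grabbed from the check camera for
evenness in each color channel:

    import numpy as np

    def channel_uniformity(frame, grid=(4, 4)):
        # frame: H x W x 3 RGB array grabbed from the digital check camera.
        # Returns, per channel, the ratio of the dimmest grid cell's mean
        # level to the brightest one's; 1.0 would be perfectly even.
        h, w, _ = frame.shape
        rows, cols = grid
        ratios = {}
        for c, name in enumerate(("red", "green", "blue")):
            means = [frame[r * h // rows:(r + 1) * h // rows,
                           k * w // cols:(k + 1) * w // cols, c].mean()
                     for r in range(rows) for k in range(cols)]
            ratios[name] = min(means) / max(means)
        return ratios

A green screen held to within a third of a stop, for example, will show a green
ratio of roughly 0.8 or better.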
In backlight, because of the shallow angle between the camera and floor, the
floor will not appear as green as the back wall. A diffused, polarized white-light
“glare” component is reflected by the floor because of the shallow angle. For
holding good shadows in backlight it is essential to use a polarizing filter over
the camera lens. The HN38 is recommended. Rotate the filter until the floor
glare is canceled. Ideally, the backlights should be polarized too, but it is rarely
done. Large sheets of polarizing plastic are available up to about 19″ wide; they
can be protected against heat with Heat Shield reflecting filter material. Of
course, HMIs emit less heat than tungsten lamps to begin with.
The composite in Figure 21 might be improved by adding a slight greenish
density and image shift to the background where it is seen through the thick
glass table top.
LIGHTING TO ELIMINATE THE SHADOW (VLAHOS
TECHNIQUE)
1. Light the entire green set uniformly with large area diffused light sources.
2. Check uniformity as noted above.
3. Place the actor in position. If he casts a shadow, add additional low-level
lighting to return the light level in the shadow to its original level.
4. Add a modest key light to create the desired modeling, and ignore the shadow
it casts. The added key light will cause a shadow to be visible to the eye, but
because the key light did not affect the green intensity of the floor in the
shadow it has created, the shadow can be made to drop out in compositing.
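Why step 4 works can be seen in the color-difference logic itself. Below is a
simplified sketch (Python with NumPy) of a green-backing color-difference
matte; it illustrates the Vlahos principle only, and is not the actual Ultimatte
algorithm:

    import numpy as np

    def color_difference_matte(frame):
        # frame: H x W x 3 float RGB array, values 0-1.
        # The matte is dense wherever green exceeds the larger of red and
        # blue. A key light that adds little green relative to the evenly
        # lit floor leaves G - max(R, B) essentially unchanged inside its
        # shadow, so the shadow drops out of the matte.
        r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
        return np.clip(g - np.maximum(r, b), 0.0, 1.0)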
Tracking Markers
When the foreground camera moves, the background must move
appropriately. Unless foreground and/or background can be photographed with a
motion-control camera, tracking data must be extracted from the foreground
image and applied to the background during compositing. This process is called
Matchmoving.
TV Monitors
It’s often necessary to matte TV images into monitors when the desired on-
screen material is not available before shooting. When the monitor is live, the
best approach overall is to feed it a pure green signal and adjust its brightness to
match the shooting stop. With this approach the room reflections in the monitor
surface can carry over believably into the composite, the camera can operate
freely, and actors can cross over the screen without difficulties in post. Watch for
reflections of the screen on actors. Where the monitor is just a prop, it’s often
possible to rig a backlit green fabric screen in or behind the monitor. If set
lighting permits the monitor to be front-lit to a sufficiently high level, the
monitor can be painted green or covered with green fabric behind the glass face
plate. The edges of the monitor usually provide all the tracking data needed in
operated shots; no on-screen markers are required.
COSTS AND CONSEQUENCES
Film Photography: Choosing a Camera Negative
Some camera negatives are better suited to composite work than others.
Ideally, one would choose the finest grained, sharpest film available. It is also
important to have low cross-sensitivity between the color layers. Foreground and
background film stocks do not have to match, but of course it’s helpful if they
have similar grain and color characteristics.
Kodak Vision 2 100T and 200T (tungsten balance) films are ideal for green
and blue backing work. The dye clouds are very tight and well defined. Vision 3
500T, the latest in a series of remarkably fine-grain high-speed films, is, as one
would expect, still grainier than the lower-speed films. While the 500T film is
not ideal, a well-exposed 500T negative is much better than a marginally
exposed 200T negative!
An interlayer effect in these films produces a dark line around bright
foreground objects (such as white shirts) when they are photographed against a
green screen. Software can deal with this effect.
Kodak Vision 2 50-speed daylight film produces superb results in sunlight,
with very low shadow noise, but requires high light levels on stage.
If these 100T and 200T films cannot be used for aesthetic reasons, one should
still pick the finest grain emulsion compatible with lighting requirements. Be
aware that additional image processing (and cost) may be required. A few
negative emulsions have so much cross-sensitivity between the color layers that
they should not be used.
Film emulsions are constantly evolving. As an example, recent improvements
in red sensitivity in some emulsions have been accompanied by more sensitivity
to infra-red reflected from costumes, altering their color noticeably. This effect is
easily dealt with by filtration—if you know it’s there. A quick test of actors and
costumes is always worthwhile.
Spatial Resolution
Spatial resolution is broadly related to the number of photosites (light-
sensitive elements) available for each color. In single-chip cameras, the green,
red, and blue photosites are on a single plane in a mosaic geometry. Depending
on the camera, groups of four to six adjacent photosites are sampled and
interpolated to create each full-color pixel. (The variation in sampling methods is
the reason that there is not necessarily a relationship between pixel count and
actual resolution of a given camera.) Most digital cameras use a mosaic called a
Bayer Array on which there are half as many blue photosites as there are green
photosites. Likewise there are half as many red photosites as green photosites.
The “missing” values are derived through interpolation from adjacent pixels in
the “de-Bayering” operation. Since human visual acuity is greatest in the green
wavelengths, Bayer’s array gives excellent visual results from an optimally small
number of photosites.
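For readers who want to see where the interpolation happens, here is a
deliberately naive bilinear de-Bayer sketch (Python with NumPy; real cameras
use far more sophisticated, edge-aware methods, and edge pixels here simply
wrap around for brevity):

    import numpy as np

    def debayer_bilinear(raw):
        # raw: H x W single-plane sensor data in an RGGB tiling:
        #     R G
        #     G B
        # Red and blue are sampled once per 2 x 2 tile, green twice --
        # which is why their true resolution is half that of green.
        h, w = raw.shape
        rgb = np.zeros((h, w, 3))
        r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
        b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
        g_mask = ~(r_mask | b_mask)
        for c, mask in enumerate((r_mask, g_mask, b_mask)):
            plane = np.where(mask, raw, 0.0)
            weight = mask.astype(float)
            # Fill the missing sites with the average of the sampled
            # neighbors in a 3 x 3 window.
            num = sum(np.roll(np.roll(plane, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            den = sum(np.roll(np.roll(weight, dy, 0), dx, 1)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            rgb[..., c] = num / np.maximum(den, 1e-9)
            # Keep the original samples where they exist.
            rgb[..., c] = np.where(mask, raw, rgb[..., c])
        return rgb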
Even in the best high-resolution Bayer arrays the blue and red image is still
half the resolution of the green image, which limits the resolution and fine detail
of the mask image.5 To address this and other image quality issues, a few high-
end single-sensor cameras (Panavision’s Genesis, Sony F35) have a 35mm-film-
sized sensor with full resolution in all three colors. (Although the sensors in the
two cameras are nearly identical, at this writing the F35 has the edge in dynamic
range.)
In three-chip cameras like Sony F23, the color image is split into green, red,
and blue images by a beam-splitter behind the lens. Each component color is
imaged on one of three full-resolution chips, so there is no resolution loss in the
red and blue channels, and no need to interpolate color values. The F23 uses
2⁄3″ HD-resolution sensors, smaller than 35mm film, which results in greater
depth of field (similar to that of 16mm), which some filmmakers love and others
find undesirable. F23’s native output is a 4:4:4 pixel-for-pixel uncompressed image,
which when correctly processed yields first-class composites.
“4:4:4” does not refer directly to RGB bandwidth, but rather to “YUV”. The Y
channel carries the luma or brightness information while U and V are the
channels from which the color information is derived (similar to Lab color space
in Photoshop). In a 4:4:4 recording, every channel is recorded at the full color
depth. “4-4-4” is actually a misnomer, carried over from standard-definition D1
digital video, but because it’s well understood to mean full bandwidth in all three
channels, its use has continued into the high-definition-and-higher digital cinema
world.
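As an illustration, the luma/color-difference split for HD can be computed with
the published Rec. 709 luma coefficients (a sketch using full-range 0–1 values
and ignoring broadcast offsets and scaling):

    def rgb_to_ycbcr_709(r, g, b):
        # Nonlinear R'G'B' in, Y'CbCr out. Y' carries brightness; Cb and
        # Cr carry the color differences that 4:2:2 and 4:1:1 systems
        # subsample.
        y = 0.2126 * r + 0.7152 * g + 0.0722 * b
        cb = (b - y) / 1.8556   # scaled B' - Y'
        cr = (r - y) / 1.5748   # scaled R' - Y'
        return y, cb, cr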
Arri Alexa, with a 35mm-film-sized Bayer Array sensor, on paper isn’t the
winner of the Pixel Count contest. Nevertheless Alexa has produced some of the
best composite results to date, thanks to dynamic range at least the equal of
present-day film negative and extremely high quality on-board image
processing.
RED Epic and Sony F65, examples of a new generation of 4K-and-higher
Bayer Array cameras, have produced state-of-the-art composite work. F65’s
huge dynamic range is particularly useful. At 4K and above, detail loss due to
de-Bayering is less of a factor. Any untried camera should of course be tested
with the actual subject matter.
Recording
Recording in “data mode” gives maximum flexibility and best quality in post.
“Data mode” records the uncompressed data (as directly off the camera sensor as
the camera’s design allows) to a hard disk. This is often called “Raw” mode, but
beware: at least one camera’s (RED) “Raw” mode is in fact compressed. Since
Raw data cannot be viewed directly, a separate viewing conversion path is
required to feed on-set monitors.
If recording in data mode is not possible, shoot material intended for
postcompositing as uncompressed 4:4:4 full-bandwidth HD (or better) video
onto a hard drive or a full-bandwidth VCR, such as Sony’s 4:4:4 SR format
machines. While Arri Raw is the preferred output from Alexa cameras, Alexa
can also record in Apple ProRes 4444, a remarkably high quality compressed
format that has produced good composite results.
To sum up, resolution numbers are not the whole story, since some cameras
trade off resolution for color depth. Test your available camera and recorder
choices.
An Imperfect World
You may have no choice but to shoot or record with 4:2:2 equipment. While
4:2:2 is not ideal, don’t forget that the last two Star Wars films were shot with
2⁄3″ 4:2:2 cameras, cropping a 2.40 slice from the center, including thousands of
green screen composites. Test the camera on the subject matter. 4:2:2 can
produce a satisfactory result in greenscreen (since the green channel has the
highest resolution in these cameras), but one should not expect the ultimate in
fine edge detail. (Consumer cameras typically record 4:1:1, and are not
recommended for pro visual effects use.)
Whatever the camera, it can’t be overemphasized that any edge enhancement
or sharpening should be turned off. The artificial edges that sharpening produces
will otherwise carry into the composite and cannot be removed. If sharpening is
needed, it can be added during compositing.
FILTRATION
In general no color or diffusion filters other than color-temperature correction
should be used on the camera when shooting green or blue screen work.
Compositing can be called “the struggle to hold edge detail”; obviously low-con,
soft effects or diffusion filtering that affects the edge or allows screen
illumination to leak into the foreground will have an adverse effect. For that
reason, smoke in the atmosphere is not recommended; it can be simulated
convincingly in the composite.
To ensure that the filter effect you desire will be duplicated in the composite,
shoot a short burst of the subject with the chosen filter, making sure it is slated as
“Filter Effect Reference”.
Keylight
At this writing, Keylight is the most-used package, thanks to its bundling into
After Effects Professional and Nuke. A straightforward interface makes it very
easy to use.
Keylight was developed originally at London’s pioneering Computer Film
Company by Wolfgang Lempp and Oliver James. It is marketed worldwide by
The Foundry.
Ultimatte
Ultimatte and Ultimatte AdvantEdge are still the tools of choice for difficult
shots. AdvantEdge borrows from Ultimatte’s knockout concept by processing
the edge transitions separately from the core of the foreground image, blending
them seamlessly into the background without loss of detail.
The deep and rich user controls require an experienced operator to get the
most from the software. The interface works as a “black box” within the
compositing package, which can complicate workflow. One benefit of this
architecture is that the interface is identical in the wide range of supported
software packages. In 2010, Ultimatte AdvantEdge software was bundled into
Nuke, the first implementation in 32-bit color depth.
Ultimatte software was the first of its kind; it was derived from the original
film Color Difference logic created by Petro Vlahos. The digital implementation
won multiple Academy Awards. Real-time Ultimatte hardware video compositing
devices are also available from the company.
Primatte
Primatte was originally developed at Imagica Japan by Kaz Mishima. The
unique polyhedral color analysis allows fine-tuned color selections between
foreground and background. The user interface is intuitive and uncomplicated
while offering many options.
Figure 1.
The actual height of the lens above ground level should be the
same as the height in scale above the ground level miniature.
E.g., If the actual lens height is 8 feet, and the scale of the model
is 1⁄2 inch equals 1 foot, then the lens height above the
miniature’s ground level should be 8 feet in the scale of the
miniature: (1⁄2″ x 8 = 4″).
Depth of field must be sufficient to carry focus from the nearest
point on the miniature to infinity. Use the Depth of Field Chart
in this manual to determine the aperture needed.
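The scale arithmetic is simple enough to put in a one-line helper (illustrative
only):

    def scale_lens_height_in(actual_height_ft, scale_in_per_ft):
        # E.g., an 8-foot lens height at a 1/2-inch-to-the-foot scale:
        # scale_lens_height_in(8, 0.5) -> 4.0 inches above the miniature.
        return actual_height_ft * scale_in_per_ft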
COLOR TEMPERATURE
Color temperature describes the “true” temperature of a “black-body radiator”
and thereby completely defines the spectral energy distribution (SED) of the
object. When the object becomes luminous and radiates energy in the visible
portion of the spectrum, it is said to be incandescent. Simply stated, this means
that when an object is heated to an appropriate temperature, some of its radiated
energy is visible.
The color temperature is usually described in terms of degrees Kelvin (°K).
The first visible color when an object is heated is usually described as “dull
cherry red”. As the temperature is increased, it visually becomes “orange,” then
“yellow,” and finally “white” hot.
One of the most important features of incandescent radiators is that they have
a continuous spectrum. This means that energy is being radiated at all the
wavelengths in its spectrum. The term “color temperature” can only be properly
applied to radiating sources that can meet this requirement. When the term
“color temperature” is applied to fluorescent lamps (or other sources that do not
meet the criteria for incandescence), it really refers to “correlated color
temperature.”
CORRELATED COLOR TEMPERATURE
The term correlated color temperature is used to indicate a visual match where
the source being described is not a black body radiator. The term is often abused,
an example being its application to such light sources as mercury-vapor lamps.
From a photographic standpoint, the correlated color temperature can be
extremely misleading. It is important to keep in mind that its connotations are
visual. It is a number to be approached with extreme caution by the
cinematographer.
See the Correlated Color Temperature chart on page 821.
THE MIRED SYSTEM
When dealing with sunlight and incandescent sources, the MIRED system
offers a convenient means for dealing with the problems of measurement when
adjusting from one color temperature to another. This system is only for sources
that can truly be described as having a color temperature. The term MIRED is an
acronym for Micro Reciprocal Degrees. The MIRED number for a given color
temperature is determined by using the following relationship:

MIRED value = 1,000,000 ÷ color temperature in °K
Sunlight should not be confused with daylight. Sunlight is the light of the sun
only. Daylight is a combination of sunlight and skylight. These values are
approximate since many factors affect the correlated color temperature. For
consistency, 5500°K is considered to be Nominal Photographic Daylight. The
difference between 5000°K and 6000°K is only 33 MIREDs, the same
photographic or visual difference as that between household tungsten lights and
3200°K photo lamps (the approximate equivalent of 1⁄4 Blue or 1⁄8 Orange
lighting filters).
As a convenience, refer to page 835 to determine the MIRED values for color
temperatures between 2000°K and 10,000°K in 100-degree steps.
Filters which change the effective color temperature of a source by a definite
amount can be characterized by a “MIRED shift value.” This value is computed
as follows:

MIRED shift value = MIRED of resulting color temperature − MIRED of original color temperature
MIRED shift values can be positive (yellowish or minus blue filters) or
negative (blue or minus red/green filters). The same filter (representing a single
MIRED shift value) applied on light sources with different color temperatures
will produce significantly different color-temperature shifts. Occasionally, the
term “decamireds” will be used to describe color temperature and filter effects.
Decamireds are simply MIREDs divided by 10.
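The MIRED arithmetic is easily scripted (a sketch; the function names are our
own):

    def mired(kelvin):
        # One million divided by the color temperature in degrees Kelvin.
        return 1_000_000 / kelvin

    def mired_shift(source_kelvin, target_kelvin):
        # Positive results call for yellowish (minus-blue) filters,
        # negative results for bluish filters.
        return mired(target_kelvin) - mired(source_kelvin)

    # The 33-MIRED figure quoted above:
    # mired(5000) - mired(6000) = 200.0 - 166.7 = 33.3
    # Decamireds are simply MIREDs / 10.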
COLOR RENDERING INDEX
The Color Rendering Index (CRI) is used to specify how faithfully a light
source renders colors when used for critical visual color examinations, such as
in color matching or inspection of objects. The CRI is established by a standard
procedure involving the calculated visual appearance of standard colors viewed
under the test source and under a standard illuminant. The CRI is not an absolute
number, and there is no relative merit to be determined by comparing the CRIs
of several sources.
The CRI is of importance photographically only when it is between 90 and
100. This is accepted to mean that such a source has color-rendering properties
that are a commercial match to the reference source. For example: the HMI
lamps have a CRI of 90 to 93, referred to the D55 standard illuminant (D55 is
the artificial match to standard daylight of 5500°K).
DEALING WITH ILLUMINATION DATA
1. Lighting Quantities — Intensity
There are two ways that intensity information is normally shown. Most
lighting manufacturers supplying instruments to the motion-picture industry tend
to present their data in a rectangular format. The polar presentation is more
likely to be encountered with commercial/industrial-type fixtures.
Where the intensity distribution of a lighting source is known, the illumination
produced by the unit can be calculated using the inverse square law. This is
expressed as follows:

Illumination (footcandles) = Intensity (candelas) ÷ Distance² (in feet)
Example: For a distance of 50 feet and a known beam angle of 26 degrees, what
is the coverage diameter of the beam (50% of the center)? The diameter is
2 × 50 feet × tan(13°) ≈ 23 feet.
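Both calculations are straightforward to script (a sketch assuming intensity in
candelas and distances in feet):

    import math

    def illumination_fc(intensity_cd, distance_ft):
        # Inverse square law: footcandles at a given throw.
        return intensity_cd / distance_ft ** 2

    def beam_coverage_diameter_ft(distance_ft, beam_angle_deg):
        # Diameter of the 50%-of-center circle: twice the throw times
        # the tangent of half the beam angle.
        return 2 * distance_ft * math.tan(math.radians(beam_angle_deg / 2))

    # beam_coverage_diameter_ft(50, 26) -> about 23.1 feet, as in the
    # example above.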
Fluorescent Lighting
There is now a considerable selection of professional lighting fixtures, some
of which are very portable and compact, that utilize a range of fluorescent lamps
which closely approximate 3200°K and 5500°K lighting. These utilize high
frequency (25,000Hz) electronic ballasts which completely eliminate any
concerns regarding possible “flicker” problems from operation at usual line
frequencies. This system was perfected by Kino Flo. The fluorescent tubes used
in these systems have a typical life of 10,000 hours.
Noncolor-correct fluorescents may be corrected by using a series of either
minus green (magenta) or plus green filters on the lamps or camera. Some color-
correct fluorescent tubes still may have some green spikes in their output when
they get too hot. This can be easily taken care of with these filters. (See charts
pages 832 and 839.)
Enclosed AC Arc Lamps
Most of these lamps are operated from alternating current sources only and
require the use of a high-voltage ignition device to start and restrike them when
hot, as well as a ballasting device to limit the current.
HMI™ Lamps
The most widely used of the new types of photographic enclosed AC
discharge lamps are known as HMIs. They are made in wattages ranging from
125–18,000. The chart on page 836 illustrates the various versions of this light
source.
These are considered medium-length arc types and are fundamentally mercury
arcs with rare earth halide additives. The color-temperature range is normally
quoted as being between 5600°K and 6000°K with a tolerance of ±400°K. The
CRI of all these types is 90 or more, and they are dimmable and capable of being
restruck hot.
As the power to the lamp is reduced (dimmed), the color temperature increases
and the CRI decreases. Where the light output needs to be reduced, it is preferable to
use neutral density filters on the luminaire in order to avoid any possibility of a
shift in color characteristics.
CalColor™ Filters
Rosco Laboratories in conjunction with Eastman Kodak has recently created a
family of filters for motion-picture lighting for which they were jointly awarded
an Academy Award for Scientific and Technical Achievement. This is Rosco
CalColor™, the first system of lighting filters specifically related to the spectral
sensitivity of color negative film.
These filters are very precise equivalents to the established range of the very
familiar “CC” filters. The Series I colors include the primaries blue, green and
red, along with the secondaries yellow, magenta and cyan. The Series II will
include six intermediaries, two of which are available at this writing, pink and
lavender. All colors are produced in the familiar 15, 30, 60 and 90 designations
(1⁄2, 1, 2 and 3 stops).
All of the colors are produced on a heat-resistant base. During manufacture
the CIE references are continuously monitored by online computerized
colorimetric equipment, which ensures the consistency of product from run to
run. The CalColor™ products are available in sheets (20″ x 24″) and rolls (48″ x
25′).
The principle of this system is that each color enhances the individual color
elements of the light source to which it is applied. For example,
CalColor™ 90 Green selectively enhances green transmission by reducing the
blue and red transmission by three stops. A CalColor™90 Magenta enhances the
blue and red transmission by reducing the effective green transmission by three
stops. See CalColor chart on page 834.
Another feature of the CalColor™ system relates to the colorant selections,
which were made with concern for the purity of each color. The colors finally
presented are so “clean” that they can be combined with fully predictable results
(i.e., combining 30 Cyan (-30R) and 15 Blue (-14G, -16R) results in a Light
Steel Blue filter (-14G, -46R)).
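Because the designations combine by simple addition per channel, a stack can
be predicted in a few lines (an illustrative sketch using the example above):

    def combine_calcolor(*filters):
        # Each filter is a dict of per-channel reductions in CalColor
        # designation units (15, 30, 60, 90). Example from the text:
        # combine_calcolor({"R": 30}, {"G": 14, "R": 16})
        #   -> {"R": 46, "G": 14}, the Light Steel Blue result.
        total = {}
        for f in filters:
            for channel, loss in f.items():
                total[channel] = total.get(channel, 0) + loss
        return total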
OF NOTE…
The control equipment for these strobes permits the addition of delay to the
pulse in degree increments. The position of the shutter will move either forward
or backward in relationship to the gate until it is in the proper position. For reflex
cameras the strobe fires twice for each frame, once to illuminate the subject and
a second time to illuminate the viewfinder.
COMMERCIAL/INDUSTRIAL LIGHT SOURCES
With today’s readily available color-corrected light sources, it is much more
cost-effective to change out the globes at a location rather than struggle with
correcting the existing sources with gels and filters.
However, most filmed television shows that are transferred and corrected
electronically require only that the location illumination be all of one color
temperature; correction to “normal” can happen in the telecine suite.
LUMINAIRES
Fresnel Lens Spotlights
Fresnel spotlights are made for standard incandescent and tungsten halogen
incandescent sources, and also for the range of HMI, CID and CSI arc discharge
lamps. The range of wattages, taking into account all types, is from 100–24,000.
These luminaires represent the most widely used motion-picture lighting units.
They provide the means for changing the beam diameter and center intensity
through a relatively broad range. Using standard incandescent lamps, the spot-to-
flood ratio may be 6 to 1 or so, and with a tungsten-halogen lamp it may be
possible to extend this ratio to 8 or even 9 to 1 under some circumstances.
The optical system of these luminaires is the same for all the variations that
may be presented. The light source and a spherical reflector are located in a
fixed relationship to one another. This combination of light source and back
reflector is designed so that the spherical reflector reflects the energy being
radiated toward the back of the housing through the filament and toward the
lens. The effect intended is that the energy radiated to the lens appears to come
from a single source. The combination of reflector and light source is moved in
relation to the lens to accomplish the focusing.
One of the most important features of the Fresnel lens spotlight is its ability to
barndoor sharply in the wide flood focus position. This property is less apparent
as the focus is moved toward a spot (at spot focus it is not effective at all). The
barndoor accessory used with this spotlight provides the cinematographer with
the means for convenient light control. The sharp cutoff at the wide flood is, of
course, due to the fact that the single-source effect produces a totally divergent
light beam. The action of the barndoor, then, is to create a relatively distinct
shadow line.
Occasionally it may be desirable to optimize the spot performance of these
units, and for this situation “hot” lenses are available. These tend to produce a
very narrow beam with very high intensity. It is important to remember that the
flood focus is also narrowed when these lenses are used.
Open Reflector Variable-Beam Spotlights
These are typically the tungsten-halogen open reflector spotlights. There are
also some low-wattage HMI-types available. These nonlens systems provide
“focusing” action, and therefore a variable diameter beam, by moving the light
source in relationship to the reflector (or vice versa). These types of units are
available for sources ranging from 400–2,000 watts. One of the drawbacks of
this system, when compared with the Fresnel lens spotlights, is that there are
always two light sources operative. The illumination field produced by these
systems is the sum of the light output directly from the bulb and the energy
reaching the field from the reflector. The use of the barndoor accessory with
these lights does not produce a single shadow, due to this double-source
characteristic. Typically a double shadow is cast from the edge of the barndoor.
The great attraction of these luminaires is that they are substantially more
efficient than the Fresnel lens spotlights. Typical spot-to-flood intensity ratios for
these types of units are between 3:1 and 6:1.
Tungsten-Halogen Floodlights
A variety of tungsten-halogen floodlighting fixtures take advantage of these
compact sources. Two of the more typical forms are treated here. These fixtures
are available in wattages from about 400–2,000.
There are types of “mini” floodlights using the coiled-coil, short-filament,
tungsten-halogen lamps, which provide very even, flat coverage with extremely
sharp barndoor control in both directions. Due to the design of the reflector in
this system, the light output from this fixed-focus floodlight appears to have a
single source. This accounts for the improved barndoor characteristics.
Cyclorama Luminaires
These lighting fixtures were originally developed for lighting backings in
theater but have broad application in similar situations in film. Because of the
design of the reflector system, it is possible to utilize these fixtures very close to
the backing that is being lit and accomplish a very uniform distribution for a
considerable vertical distance. Typically these units are made for tungsten-
halogen linear sources ranging from 500–1,500 watts.
Based on the variations in design, some of these may be used as close as 3 to 6
feet from the backing being illuminated. The spacing of the luminaires along the
length of the backing is in part determined by the distance of these fixtures from
the backing itself.
Soft Lights
Soft lights, which attempt to produce essentially shadowless illumination, are
made in wattages from 500 up to about 8,000 and typically utilize multiple
1000w linear tube tungsten-halogen lamps. The degree of softness is determined
by the effective area of the source.
The Aurasoft™ is unique in that it produces a greater area of coverage than a
comparable conventional unit, at a full stop advantage in light level and with
comparable “shadow-casting” character. These units can be quickly converted in
the field between tungsten-halogen and HMI light sources.
Also available is a plastic tubular diffuser with a reflector at the closed end
which is fitted at the open end, with a tungsten-halogen spotlight. The
configuration allows for the unit to be easily hidden or placed in a corner to
provide soft light that can be used very close to the actors.
Helium-filled balloons are designed to contain either tungsten-halogen or
HMI sources in various combinations. These balloons are tethered and can be
used up to an altitude of about 150 feet (45 meters). They range in size from
approximately 4 feet in diameter (1.2 meters) to tubular shapes as much as 22
feet long (6.6 meters) by 10 feet in diameter (3 meters). There is a range of
power levels, with tungsten halogen lights up to 16,000 watts and HMI lights up
to 32,000 watts.
Fig 5. Reflector systems of various “soft” lights
Beam Projectors
A luminaire consisting of a large parabolic mirror with the globe filament
placed at the focal point of the mirror, so as to produce a parallel beam of light.
Sizes are described by the diameter of the mirror: 18″, 24″, 36″. The lamp source
is either HMI or tungsten. In bygone years the source was the carbon arc, thus
the old name “sun arc.” The beam width can be varied a small amount. A series
of baffle rings are placed to cover the filament area to eliminate nonparallel rays
of light.
Snoots
This is a funnel-shaped device used to limit the beam of a Fresnel spotlight.
Available in various diameters.
Scrim
The type of scrim referred to here is placed directly in the accessory-mounting
clips on a luminaire. This type of scrim is normally wire netting, sometimes
stainless-steel wire, which is used as a mechanical dimmer.
The advantage of the scrim is that it permits a reduction in light intensity in
several steps (single and double scrims) without changing the color temperature
or the focus of the luminaire. Contrary to popular belief, it is not a diffuser.
The half-scrim permits the placement of scrim material in only half of the
beam and is widely used on Fresnel spotlights. It overcomes the problem
encountered when the Fresnel is used at fairly high angles. The portion of the
beam striking the floor or objects near the floor closest to the luminaire produces
intensities that are too high to match the desired level at the distance associated
with the center of the beam. The reason for this, of course, is the substantial
variation in the distances that the illumination energy travels. The half-scrim
applied on the portion of the beam impinging on the nearest objects can
overcome this problem.
Gel Frames
Different forms of these holders are made and designed to fit into the
accessory clips on the front of most luminaires. They permit the use of various
types of plastic filter materials to modify the characteristics of the beam. Color
media may be put in these holders to affect color, and a wide range of diffusion
products are available.
Electrical Dimmers
The old-fashioned resistance and autotransformer dimmers have given way to
the solid-state SCR dimmer systems. Computer-managed DMX controllers can
not only control the intensity of each luminaire, but can also switch cues, replug
circuits, precisely control the duration of a dim, and control many other
accessories (color wheels, lamp movement and focus). All of these cues can be
recorded and stored in the computer for replayability. With arcs, mechanical
shutters are motorized to execute the dim.
GRIP ACCESSORIES FOR LIGHT CONTROL
Diffusers
There are various diffusion materials sewn on wire frames of different types
and sizes which permit the diffusion of both artificial and natural sources.
They are translucent materials (various textiles) that truly act as diffusion.
When supplied in very large sizes and supported from a single point, they are
called butterflies; when the frame becomes extremely large and is supported
from two or more points, it is called an overhead.
Reflectors
Reflector boards are widely used for redirecting sunlight and modifying its
characteristics so that it is suitable for use as set illumination and fill light. These
boards have been surfaced with various reflecting media, usually sign-painter’s
leaf, either silver or gold.
Most reflectors have two sides, a soft side and a more mirror-like surface,
commonly referred to as the “hard” side.
Egg Crates
Large, fabric egg crates can be stretched in front of soft diffusion material to
control spill light.
Fig. 9. Visible Light Spectrum
Violet: 380-430nm; Indigo: 430-450nm; Blue: 450-480nm;
Green: 510-550nm; Yellow: 550-590nm; Orange: 590-610nm; Red: 610-760nm
LED Lighting for Motion Picture Production
by Frieder Hochheim
ASC Associate Member
Over the last four years LED lighting fixtures have been working their way
into the motion picture industry. Led by companies such as Litepanels
Inc., products have been introduced that effectively exploit the inherent
characteristics of LED technology: low DC power and amperage draw, low heat,
and dimmable without color shift. LEDs are primarily powered by low-voltage
DC with low-energy demands. This has enabled the design of some innovative
battery-operated fixtures. Untethered from a power cable, the instruments have
provided a handy, easy-to-rig fixture well suited to the fast-paced shooting styles
of today’s production environments. The use of multicolored LEDs based on
RGB principles, or the newer multicolored LED mixing, allows for products that
expand the potential for accurate spectral displays.
A number of companies have emerged over the past few years that are
providing innovative LED products: Color Kinetics, Litepanels, Mole-
Richardson, Gekko Technologies, Element Labs, Kino Flo, Zylite, Nila, LEDz
among others.
Challenges however remain in this new technology. For a cinematographer it
is all about the light. If a fixture’s light characteristics aren’t correct for the
scene, it will be passed over. Cinematographers strive for clean edge shadows or
soft light sources that display diffuse shadow lines. The LED will have to deliver
light as good as or better than existing tools provide. Energy savings alone will
not be the reason to embrace the LED.
The introduction of the LED has presented lighting designers with somewhat
of a predicament as it pertains to motion picture lighting. The LED is a point
source much like Thomas Edison started with back in the 1880s. Edison’s source
was wrapped in a clear glass envelope. One bulb alone provided little light.
Numerous light bulbs needed to be combined to provide adequate light levels.
The same challenges that the early designers of lighting instruments dealt with
are now being addressed again in the twenty-first century. Given the number of
patent filings the U.S. patent office has seen over the LED, you would think that
Thomas Edison had never existed. We are essentially reinventing the light bulb.
Litepanels – www.litepanels.com
Product names: Micro Series, Mini-Plus Series,
1 x 1, 2 x 2, 4 x 4, Ringlite Series, SeaSun
One of the first successful companies to exploit the advantages of LEDs was
Litepanels. Founded by industry gaffers, they have provided innovative lighting
tools for on-camera lighting as well as more general set lighting applications.
Their battery-operated Micro, Mini-Plus and Ringlite products are designed to
mount onto the camera as eye-lights or fill light. Their 1 x 1 fixtures provide a
lightweight lighting tool that can operate on battery power or AC. By carefully
binning their LEDs they have been able to provide a high CRI light quality
essential for good imaging.
Nila – www.nila.tv
Product names: Nila JNH series
Founded by gaffer/grip Jim Sanfilippo, Nila offers a high-powered LED
lighting system consisting of modules that interconnect to form larger fixtures.
Interchangeable lenses provide beam angles of 10, 25, 45 and 90 degrees, as
well as vertical and horizontal elliptical beam options. Other
accessories such as yokes and gels flesh out the system. The module is dimmable
through a DMX protocol and is available in either daylight or tungsten
equivalents. The units operate on a universal 90–240V AC/DC power supply.
Zylight – www.zylight.com
Product names: Zylight IS3, Z90 & Z50, Remote using Zylink protocol.
Founded by Charlie Collias, a veteran of video and documentary production,
and his brother Jim, who has an electrical engineering background, Zylight
produces a range of color-changing RGB portable lights well suited for
on-camera and general studio lighting applications. An innovative wireless
remote-control system, ZyLink, offers control over numerous fixtures at one
time. The units operate on AC or DC power. Given the high density of LEDs on
the light-emitting surface, the shadow characteristics are very clean, without
multiple shadow lines.
LEDz – www.led-z.com
Product names: Mini-Par, Brute 9, 16, and 30.
Founded by veteran HMI designer Karl Schulz, LEDz offers a range of small
portable lighting instruments.
The Mini-Par is a 12VDC on-camera light that offers various beam angles
using a set of accessory lenses. The Brute fixture family consists of 30-watt,
50-watt and 90-watt fixtures, available in daylight (5500°K) or tungsten
(3000°K) versions.
An Introduction to Digital Terminology
ANALOG – The natural world is analog. Light and sound are described as
waves whose shape varies with their amplitude and frequency in a
continuously variable signal. An analog signal is understood to be formed by
an infinite number of points. Analog human vision perceives light as a
continuous gradient and spectrum from black to white. Film is an analog
medium and can record a continuous spectrum. Digital video cameras
capture a scene as analog voltage and then digitize it with an A/D (analog-to-
digital) converter to create linear digital code values.
A/D CONVERSION – Analog-to-digital conversion transforms analog data
(such as light intensity or voltage) into a digital binary format of discrete
values. Referred to as digitization or quantization.
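A minimal sketch of the quantization step (illustrative; real converters add
filtering and other refinements):

    def quantize(level, full_scale, bits=10):
        # Map a continuous 0..full_scale input onto 2**bits discrete
        # code values; a 10-bit converter has 1,024 steps.
        steps = 2 ** bits
        code = int(level / full_scale * (steps - 1))
        return max(0, min(steps - 1, code))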
ANAMORPHIC – An optical process in which a widescreen (wide aspect ratio)
image is recorded onto a narrower target (film or sensor) using an
anamorphic lens to squeeze the image horizontally. An anamorphic lens will
squeeze a 2.40:1 ’Scope scene onto a 1.33:1 negative frame or 4 x 3 camera
chip. The format uses almost the entire image area with no waste (of pixels
or negative area), resulting in a higher-resolution image. For display, an
anamorphic projector lens is needed to unsqueeze the image.
ARTIFACT – A flaw or distortion in an image—a result of technical limitation,
incompatibility or error. An artifact can be introduced at any step in which
an image is altered or converted to another format.
ASPECT RATIO – Ratio of screen width to height.
Common ratios:
1.33:1 (4 x 3) Standard TV—or
35mm Full Aperture (Silent)
1.37:1 Academy Aperture—or
Regular 16mm Full Aperture
1.66:1 European Theatrical Standard
1.66:1 Super-16mm Full Aperture
1.78:1 (16 x 9) HDTV
1.85:1 American Theatrical Standard
2.40:1 Anamorphic or ’Scope
BAKED IN – Changing or re-recording recorded images to integrate a
particular look. This action limits options in postproduction but ensures that
the look is preserved.
BANDING – A digital artifact caused by digital processing in which lines or
bands appear in an image that previously displayed a smooth unbroken
gradient between light and dark or two different color values. Banding can
be caused by the down-conversion of the bit-depth or sampling ratio of an
image that results in a loss of data.
BANDWIDTH – Literally, the range between the lowest and highest limiting
frequencies of an electronic system. Commonly used to refer to the size of
the “pipeline” employed to transmit data, quantified by the amount of data
that can be transmitted over a given period of time, such as megabits per
second. Compression techniques are used to reduce the size of image files to
facilitate real-time display of high-quality images in systems with limited
bandwidth.
BAYER PATTERN – A chip design that allows a camera to record full color
using a single chip. The pixel array employs a pattern of 2 x 2 matrices of
filtered photoreceptor sites—two green, one red and one blue. Three-chip
cameras record red, green and blue on three separate chips. (See Figure 7.)
BINARY CODE – The mathematical base of digital systems that uses
combinations of only 0 and 1 to represent all values. A mathematical
representation of a number in base 2.
BIT – The smallest increment of digital information. A bit is a digit of binary
code which can define only two states—0 or 1, on or off, black or white.
BIT DEPTH – Bit depth determines the number of steps available to describe
the brightness of a color. In a 1-bit system, there is only 0 and 1, black and
white. An 8-bit system has 256 steps, or numbers from 0–255 (256 shades of
gray). Until recently, 8-bit was standard for video and all monitors. Most
monitors still have only 8-bit drivers, but many HD video systems support
10-bit signals through their image-processing pipelines.
A 10-bit system has 1024 steps, allowing more steps to portray subtle tones.
A linear representation of light values, however, would assign a
disproportionate number of steps to the highlight values—the top range of
512–1024 would define only one f-stop, while leaving 0–512 to define all
the rest. A logarithmic representation of code numbers, however, gives equal
representation across the full dynamic range of film negative—in 10-bit
space, 90 code values for each f-stop. This allows for more precision to
define shadow detail. For this reason, 10-bit log is the standard for recording
digital images back to film. Some publishing color applications use a 16-bit
system for even more control and detail. The cost of the additional bits is
disk space, memory, bandwidth and processing time.
The consequence of too few bits can be artifacts—flaws in the image
introduced by some image processing. Some artifacts include banding,
where a smooth gradient is interrupted by artificial lines, and quantization,
where a region of an image is distorted. If image data is recorded or scanned
in 10-bit color, converted to an 8-bit format for postprocessing, then
converted back to 10-bits for film recording, image information (and usually
quality) is lost and cannot be retrieved. Whenever possible, it is preferable to
maintain all of the image data and not discard information through
conversion to a more limited format.
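The highlight-heavy allocation of linear code values can be demonstrated in a
few lines (illustrative arithmetic only):

    # Linear 10-bit encoding: each f-stop doubles the light, so the top
    # stop below clip spans half of all code values.
    levels = 1024
    for stop in range(1, 11):
        high = levels // (2 ** (stop - 1)) - 1
        low = levels // (2 ** stop)
        print(f"stop {stop} below clip: codes {low}-{high}")
    # stop 1: codes 512-1023 (512 values) ... stop 10: a single code.
    # A 10-bit log encoding such as Cineon instead assigns a roughly
    # constant number of codes (about 90) to every stop.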
BLACK STRETCH – Flattens the curve of the toe to put more steps or values
in the lower part of the tone scale. Reveals more shadow detail.
BYTE (B) – A digital “word” composed of 8 bits which can describe 256
discrete values.
• KB = Kilobyte (1,000 bytes)
• MB = Megabyte (1 million bytes)
• GB = Gigabyte (1 billion bytes)
• TB = Terabyte (1 trillion bytes)
CALIBRATION – The adjustment of a display device, such as a monitor or
projector, that prepares a device to produce accurate, predictable and
consistent results in accordance with a standard. Calibration is an essential
factor in color management and look management. It enables two people, in
two different locations, to look at the same digital image and see the same
look.
CCD – (charge-coupled device) A type of light-sensitive chip used in cameras
for image gathering, either as a single chip or in a three-chip array. Basically
a grayscale device that measures light intensity, a CCD records color in a
single-chip camera by using color filters on each photoreceptor site (see
Bayer pattern). CCDs are analog sensors that convert light intensity into
voltage. An A/D converter then transforms the analog voltage value to
digital data.
CHIP – A device aligned behind the lens that contains an array of photoreceptor
sites, or light-sensitive sensors, for capturing image data. The sensors
convert light intensity into voltage levels, an electrical charge proportional to
the intensity of the light striking it. The voltage is then sampled and
converted to a digital code value—one for each pixel of the image.
CHROMATICITY DIAGRAM – The two-dimensional plot of visible light,
commonly used to represent the three-dimensional 1931 CIE XYZ
colorspace standard. It is effectively a 2-D “slice” out of a 3-D model. The
horseshoe-shaped plot defines the range of human color vision, independent
of brightness. More limited color gamuts (ranges) of media (such as film or
HD video) or display devices can be clearly portrayed in the diagram.
Individual colors are defined as points in the diagram with chromaticity
coordinates (x, y). Color saturation is greatest at the outer rim of the color
plot; the neutral white point area is near the center of the plot.
CHROMATICITY COORDINATES – (x, y) Chromaticity coordinates define
specific color values in the CIE XYZ color space. They describe a color
quality (hue and saturation only) independent of luminance (brightness
value).
CIE XYZ COLOR SPACE – The color space originally defined in 1931 by the
Commission Internationale de L’Eclairage. (The CIE standard has been
updated, and is currently being re-evaluated.) CIE XYZ uses a three-
dimensional model to contain all visible colors. It defines a set of three
primaries (specific red, green and blue colors) and a color gamut to describe
the range of human vision.
Figure 1. Triangles within CIE chromaticity chart define
limits of color gamuts
CINEON FILE – A 10-bit log image file format based on film density.
Designed to capture the full dynamic range of the motion picture negative, it
is an industry standard for film scanning and recording.
CLIPPING – The total loss of image detail at either end of the scale—highlight
or shadow. Clipping occurs when the exposure moves beyond the threshold
determined by the capability of a camera or recording device. On a
waveform monitor the exposure thresholds for clipping are usually 0 and 100
IRE.
CMOS (complementary metal-oxide semiconductor) – A
category of chip with an array of photoreceptor sites. Its integrated circuitry
has both digital and analog circuits. Unlike CCDs, where the charge is
actually transported across the chip and read at one corner of the array,
CMOS chips have transistors at each pixel that amplify and move the charge
using traditional wires. This approach is more flexible because each pixel
can be read individually; however, CMOS chips are less sensitive to light
because many of the photons hit the transistors instead of the photodiode.
The distinguishing characteristic of the CMOS chip is that it allows a large
number of camera processing functions while using less power and
generating less heat than other types of chips. It can perform at various
frame rates and perform either progressive or interlaced scans. The Arri D-
20 uses a CMOS chip, while the Panavision Genesis uses a CCD chip.
CMY (cyan, magenta, yellow) – The additive secondary colors. Combining two
additive primary colors (R, G, B) produces an additive secondary color.
CODEC – A compression/decompression algorithm. The software that executes
the compression and decompression of an image.
CODE VALUE – The digital representation of the brightness and color of a
pixel.
COLOR DECISION LIST (ASC-CDL) – Designed by the ASC as the
beginning of a universal color-grading language, the CDL is a standardized
limited expression of how an original image has been graded in digital post.
Currently addressing only lift, gamma and gain parameters, the CDL can
facilitate working in multiple facilities on the same production, and allow
basic color-correction data to be interchanged between color-correction
systems from different manufacturers. Within the areas addressed, using
color-correction systems that accommodate the CDL code, the CDL will
process an original image and produce the same results on different systems
and in different facilities. It is a useful tool for the cinematographer in look
management.
COLOR GAMUT – The range of colors a system can display or record. When
a specific color cannot be accurately displayed in a system, the color is
considered “out of gamut” for that system. The standard HD video color
space, defined by ITU-R BT.709 (aka. Rec. 709), is an example of a device-
dependent color space. It uses a YCrCb space, which separates luminance
(Y) and chrominance (color – described by Cr and Cb), and allows for color
subsampling, another means of compression.
One method to compare media and devices is to plot their color gamuts on
the chromaticity diagram. The CIE XYZ color space is the standard device-
independent reference. Although the color space uses a three-dimensional
model to represent its colors, it plots the visible colors on a flat chromaticity
diagram, actually a 2-D “slice” taken of the 3-D model, usually represented
as a colored horseshoe-shaped area, leaning left on a 2-dimensional x-y axis.
All colors visible to human perception are plotted on the graph. The colors
of the spectrum lie along the horseshoe curve, left to right, blue to red. Red,
green and blue primaries are specified. The white point is nominally located
where the three primaries are equal in contribution, but to accommodate
different color temperatures, their respective white points are plotted along a
curve across the center area of the model. Specific colors are identified in the
2-D space by a set of two numbers, called chromaticity coordinates (x, y),
which specify their position on the diagram.
COLOR MANAGEMENT – Maintaining the accuracy and consistency of
color throughout the digital workflow from capture to final release.
COLOR SAMPLING – Describes the precision of the measurement of light
and color by a camera system. It is represented by three numbers, separated
by colons, and refers to the relative frequency of measurement (sampling) of
color values. The first number represents luminance (Y, commonly
associated with green) and the second two numbers represent chroma or
color as red and blue difference values (Cr, Cb).
4:4:4 captures the most information, sampling color at the same frequency as
luminance. 4:2:2 is the current standard on HD production cameras. The
color information is sampled at half the frequency of the luminance
information. The color precision is lower, but is adequate in most production
situations. (Human vision is more sensitive to brightness changes than it is to
color variation.) Problems can arise, however, in postprocessing, such as in
the compositing of greenscreen scenes, where color precision is important. It
is recommended that any visual effects shots that require image processing
be recorded in a 4:4:4 format.
Sampling can also be understood as a type of compression of image data:
4:4:4 sampling is uncompressed, 4:2:2 sampling compresses color values by
a factor of 2, 4:1:1 compresses color 4:1.
Figure 2. 4:2:2 color sampling
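The relative data loads can be estimated directly from the sampling numbers (a
sketch; the third digit matters only for vertically subsampled schemes such as
4:2:0, which are ignored here):

    def bytes_per_frame(width, height, sampling, bits=10):
        # sampling: (4, 4, 4), (4, 2, 2) or (4, 1, 1). The second number
        # sets the horizontal chroma sample rate relative to luma.
        j, a, _b = sampling
        luma = width * height
        chroma = 2 * width * height * (a / j)   # Cb and Cr planes
        return (luma + chroma) * bits / 8

    # For 1920 x 1080 at 10 bits, 4:4:4 carries 1.5x the data of 4:2:2.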
COLOR SPACE – A color space is a structure that defines colors and the
relationships between colors. Most color-space systems use a three-
dimensional model to describe these relationships. Each color space
designates unique primary colors—usually three (a defined red, green and
blue)—on which the system is based. The system can be device-dependent
or device-independent. A device-dependent system is limited to the color
representation of a particular device or process, such as film. The RGB color
space that describes film is representative of the range of colors created by
the color dyes used in film. A device-independent color space defines colors
universally, which through a conversion process using a profile, can be used
to define colors in any medium on any device.
The color space designated as the standard for digital-cinema distribution is
XYZ, a gamma-encoded version of the CIE XYZ space. A device-
independent space, XYZ has a larger color gamut than other color spaces,
going beyond the limits of human perception. All other physically realizable
gamuts fit into this space.
COMPONENT VIDEO SIGNAL – A video format in which luminance and
chrominance remain as separate entities and have separate values.
COMPOSITE IMAGE – An image created by the combination of two or more
source images. Compositing tools include matte-creation software, and
chroma-key hardware.
COMPOSITE VIDEO SIGNAL – A video format in which luminance and
chrominance are combined.
COMPRESSION – Compression is the process used to reduce the size of an
image file with the least sacrifice in quality or usable information. One
method used by compression schemes is to analyze image material, and find
and remove any redundancy, such as blue sky with a consistent color.
Compression reduces the size of the image file and as a result lowers the
data rate (and hardware) required to display the file in real time.
Compression can be interframe or intraframe.
In some compression schemes, the original image can be fully reconstituted.
Such a scheme uses “lossless” compression. However, a danger exists with
certain compression codecs that claim to be “visually lossless.” The claim
may be true for the direct display of an original image that requires no
manipulation in postproduction, but if it later needs to be processed for
significant color grading or visual effects, disturbing artifacts may appear
that significantly reduce the quality of the image. Other codecs discard
image information, which cannot subsequently be retrieved. This is
considered “lossy” compression.
COMPRESSION RATIO (uncompressed to compressed) – The ratio that
compares the image file size of the original uncompressed image to the
compressed version. The lower the ratio, the lower the compression, and
usually the higher the quality of the resulting image. A format that uses a 2:1
compression ratio preserves more image information than one that uses a 4:1
compression ratio.
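The arithmetic is straightforward division; a brief sketch (the uncompressed frame size is an assumed example):

```python
def compressed_megabytes(original_mb, ratio):
    """Stored size implied by an N:1 compression ratio."""
    return original_mb / ratio

original = 7.8  # assumed uncompressed HD frame, in megabytes
for ratio in (2, 4, 8):
    print(f"{ratio}:1 ->", round(compressed_megabytes(original, ratio), 2), "MB")
# Lower ratios discard less: 2:1 retains twice the data of 4:1.
```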
CRT (cathode-ray tube) – The traditional TV monitor. Cathode rays are streams
of electrons produced by the heating of a cathode inside a vacuum tube. The
electrons form a beam within the cathode-ray tube due to the voltage
difference created by two electrodes. The image on the screen is formed by
the manipulation of the direction of this beam by an electromagnetic field
that sweeps across the surface of the phosphorescent screen. Light is emitted
as the electrons strike the surface of the screen.
CRUSHING THE BLACKS – Raises the slope of the toe, which reduces the
number of steps or values in the lower part of the scale. Darker shadow
details are lost in black.
DATA – Information stored in a digital file format. Digital image data is
assumed to be raw, unprocessed and uncompressed, retaining all original
information, as opposed to video, which has been processed and
compressed, thereby losing some information.
DCDM (digital cinema distribution master) – The distribution standard
proposed by DCI: uncompressed and unencrypted files that represent
moving image content optimized for direct electronic playback on a
projector and sound system in a theater. The DCDM’s files contain the
master set of raw files that produce high-resolution images, audio, timed text
(subtitles or captions) and metadata—auxiliary data that can include data to
control room lights and curtains.
DCI (Digital Cinema Initiative) – DCI was founded in March 2002 as a joint
venture of Disney, Fox, MGM, Paramount, Sony, Universal and Warner
Bros. DCI’s purpose was to create uniform specifications for digital cinema
projection, and establish voluntary specifications for an open architecture for
digital cinema to insure a uniform and high level of technical performance,
reliability and quality control.
Some of the specs in the standard: 12-bit, 4K or 2K, JPEG 2000. The DCI
system specification defines a life cycle in which content exists in a
succession of states:
• DSM – Content originates as a digital source master; format not
specified.
• DCDM – Content is encoded into a digital cinema distribution master,
according to the specification (12 bit, 4K or 2K, XYZ color space).
• DCP – Content is compressed and encrypted for transport to the theater
as a digital cinema package, covered by the specification (JPEG 2000).
• DCDM (again) – Content is unpackaged, decrypted and decompressed
at the theater for exhibition.
DCP (digital cinema package) – The compressed, encrypted DCDM (digital
cinema distribution master). When the DCP arrives at the theater, it is
unpackaged or decoded—decrypted and decompressed—to make a DCDM
ready for display or projection.
DEAD (dark or bright) PIXEL – A pixel that is permanently bright (white,
red, blue or green) or dark (black) on the screen. It can occur in either
recording devices (as cameras) or display devices (as monitors or
projectors). Some cameras have systems to correct for dead pixels, filling in
the space with values from adjacent sensors.
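One common concealment strategy, interpolating from neighboring photosites, can be sketched as follows (a simplified single-channel stand-in for any camera's actual correction logic):

```python
def conceal_dead_pixel(image, row, col):
    """Replace a dead pixel with the average of its valid horizontal
    and vertical neighbors (image is a 2-D list of values)."""
    neighbors = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        r, c = row + dr, col + dc
        if 0 <= r < len(image) and 0 <= c < len(image[0]):
            neighbors.append(image[r][c])
    image[row][col] = sum(neighbors) // len(neighbors)
```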
DEVICE-DEPENDENT COLOR SPACE – A color space that is limited by
the performance of a particular device or process, such as film. Film
reproduces a particular range of color (color gamut). The RGB color space
that describes film is representative of the gamut of the color dyes used in
film. It is only accurate for describing colors in the film medium.
DEVICE INDEPENDENT COLOR SPACE – A color space that defines
colors universally. Through a process of conversion or translation specific to
the application, medium or device being used, its values can be used to
specify colors in any medium on any display device.
DIGITAL CONFORM – The process of editing the full-resolution digital
image material to match the EDL (edit decision list) which was created in
the editorial process.
DIGITAL INTERMEDIATE – The conversion of a film image into digital
media for all postproduction processes, including the final color grading, as
an intermediate step for distribution, whether by film or digital media.
Common usages of the term:
• The entire digital postproduction process that comes between principal
photography and the production of the final distribution master.
• The digital version (image files) of the production, used during
postproduction.
• The final color-grading process (which may take weeks for a feature)
that integrates all visual elements and determines the final look of the
production.
• The product or result of the final color-grading process (digital master).
DIGITAL MASTER – The completed output or result of the final color-grading
process, subsequently used to create the distribution masters. Or, the full
digital version of the camera-original production.
DIGITAL SIGNAL – A signal represented by discrete units (code values),
measured by a sampling process. A digital signal is defined by a finite code
value in accordance with a predetermined scale.
Safety plays a prominent role on any film and television set. It can be front
and center, influencing decisions, or a nagging annoyance, getting in the way
of the creative process. Many people forget that the film and television industry
is, in fact, an industry. While it takes the artistic input of many different
participants to make a film, the filmmaking process can place the cast and crew
in situations that pose varying degrees of risk.
A production company’s attitude is the starting place for dealing with the
risks. Whether a union or a nonunion production, all the companies involved
have a responsibility to see that their workers are trained and qualified to
perform their specific job. Workers need to have their production’s Code of Safe
Practices communicated to them (sometimes also called Policies and
Procedures), but many production people have little or no experience with issues
of safety, or their productions may not operate with a Code of Safe Practices.
How can you help make a production safer?
Safety is an attitude. Make the conscious decision not to get hurt or sick, and
not to let anyone else get hurt or sick, at work.
USE COMMON SENSE
Don’t do things just to save time. If the camera is set up in the parking lane of
a road and there are a number of people crowded into the lane, don’t walk into
traffic to get a lens or something to the camera. Drivers are not worried about
you, and “cinematic immunity” doesn’t protect you from their vehicles. Get the
equipment to the camera as quickly but as safely as possible. (It’s also part of the
AD’s job to help everyone get their jobs done quickly and safely on set.)
PLAN FOR THE JOB
During preproduction, on scouts, and any time shots are being discussed,
make safety a part of the planning. Give qualified people all of the necessary
parameters and listen to suggestions. Consider alternatives based on common
sense, budget, schedule, and personal safety.
LEARN HOW TO DO YOUR JOB
Training to do a job safely and correctly is important. Simple things like
learning to lift and carry equipment properly can save you from pain. Camera
cases and other camera equipment are often heavy and can cause back and leg
problems. Proper lifting techniques and carts will make your job easier, and will
help you reach retirement without any lasting injuries.
LISTEN TO QUALIFIED PEOPLE
Sometimes a camera or lamp position has to be made safe before it can be
used. If the setup wasn’t planned for, it may take more time than is available.
The qualified person should be able to say what it will take to do safely what
was asked for. If the time or money isn’t available, an alternative setup is
needed.
INFORM EMPLOYER OF RISKS
Safety in the workplace is everyone’s responsibility. Inform your employer of
potential risks. It’s okay to question the safety of doing something if there’s the
possibility of an imminent risk to the cast or crew. Have the employer address
the issue so you can return to work.
MICROWAVE TRANSMITTERS
Many film and television productions use microwave transmitters to send
video signals from the camera to remote monitors for viewing. A microwave
transmitter is basically a radio operating in a high-frequency band, not a
microwave oven. Even so, this equipment can pose a radiation hazard if
improperly handled.
The average microwave transmitter is rated at 0.25W (+24dBm) nominal RF
power output and is designated an intentional radiator. It can deliver video and
audio signals over short ranges when used with a receiver and appropriate
antennas in either fixed or mobile applications. When the transmitter is
operating, its antenna is emitting radio-frequency energy, so safe operating
procedures must be observed. On set, your microwave technician is the go-to
individual for this information.
The microwave tech should know and be able to answer the following:
1. How much power is the system radiating?
2. What frequency is being transmitted?
3. How close is the system to your person?
EXPOSURE
Exposure is based upon the average amount of time spent within an
electromagnetic field (RF energy) of a given intensity (field intensity in
mW/cm²). There are two categories of exposure situations:
occupational/controlled and general population/uncontrolled.
Occupational/controlled
These are situations in which persons are exposed as a consequence of their
employment, provided those persons are fully aware of the potential for
exposure and can exercise control over their exposure. These limits also apply
when an individual is passing through a location where occupational/controlled
limits apply, provided the individual is made aware of the potential for
exposure.
General population/uncontrolled
These are situations in which the general public may be exposed, or in which
persons who are exposed as a consequence of their employment may not be fully
aware of the potential for exposure or cannot exercise control over their
exposure.
Exposure may be controlled by observing the FCC-compliant safe distances
and remaining beyond those distances from the antenna at all times when the
transmitter is operational. At no time should the user remain within a distance
less than the indicated safe distance for a period greater than 30 minutes.
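The governing arithmetic can be sketched as follows. The far-field power-density formula is standard; the 1 mW/cm² figure in the comment is the FCC general-population limit for frequencies above roughly 1.5 GHz (per OET Bulletin 65) and should be confirmed against current guidance for the actual frequency in use:

```python
import math

def dbm(power_w):
    """Express transmitter power in dBm: 10 * log10(P in milliwatts)."""
    return 10 * math.log10(power_w * 1000)

def power_density(power_w, antenna_gain, distance_cm):
    """Far-field power density S = P*G / (4*pi*d^2), in mW/cm^2."""
    return (power_w * 1000 * antenna_gain) / (4 * math.pi * distance_cm ** 2)

print(round(dbm(0.25)))  # the 0.25W transmitter above: +24 dBm
# Density 20 cm from 0.25W through an assumed unity-gain antenna,
# well under the 1 mW/cm^2 general-population guideline:
print(round(power_density(0.25, 1.0, 20), 4), "mW/cm^2")  # ~0.0497
```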
When setting cameras in an urban environment like New York City, Chicago,
or Los Angeles, care should be taken in the placement of microwave transmitters
and antennas. Many large cities use their rooftops to stage repeaters for data
and voice-over-IP services. Therefore, cameras should not be placed next to an active
antenna stationed on these roofs. The production location manager should have
this indicated on their surveys. Also, many rooftops will have warning signs
posted.
Common sense should be factored into the use of microwave transmitters and
antennas. FCC guidelines addressing a variety of frequencies can be accessed on
the Internet at http://www.fcc.gov.
The industry has come a long way when it comes to safety. Today’s training
and procedures are changing to protect workers more than ever. The continuation
of this trend takes a safe attitude. The work you’re doing may be important, but
is it worth getting hurt for?
Industrywide labor management safety bulletins may be found and
downloaded from the Contract Services Administration Trust Fund at
www.csatf.org.
Kent H. Jorgensen is the Safety Representative for IATSE Local 80, the Grips union.
Vincent Matta is a Business Representative for IATSE Local 600, the
Cinematographers Guild.
Preparation of Motion Picture Film Camera
Equipment
1. Spreader
a. Runners slide smoothly and lock in all positions.
b. End receptacles accommodate the tripod points and spurs, and hold
them securely.
2. Tripods
a. Each leg extends smoothly and locks in all positions.
b. Top casting accommodates the base of the tripod head (flat Mitchell,
ball or other).
c. Hinge bolts that attach each leg to the top casting are adjusted to
proper tension. Each leg swings easily away from top casting and remains
at selected angle.
d. Wooden Tripods (baby, sawed-off, standard). Legs are solid and have
no splits or breaks.
e. Metal or Fiber Tripods (baby, standard, two-stage). Legs are straight
and have no burrs or dents.
3. Tripod Head
a. Base (Mitchell, ball or other) fits and locks into tripod top casting.
b. Ball base (only) adjusts smoothly and locks securely in any position.
c. Camera lockdown screw fits into camera body, dovetail base with
balance plate, riser or tilt plate.
d. Top plate of head includes a quick-release (touch-and-go) base, which
accommodates a quick-release plate that bolts to camera body or any of the
adapter plates.
e. Eyepiece-leveler bracket and frontbox tripod head adapter on the head
accommodates the leveler rod and frontbox being used.
f. Friction or fluid head:
1. Head balances to neutral position with camera attached. Balance
springs engage and adjust properly.
2. Pan and tilt movement is smooth.
3. Both brake levers lock securely in all positions.
4. Both drag knobs easily adjust the tension of movement from free
movement to the tension required by the operator.
g. Gear head:
1. Head balances to neutral position with camera attached.
2. Pan-and-tilt movement is smooth.
3. Both brake levers engage properly. (Gears may move under stress.)
4. Gears shift smoothly between each speed.
h. Remote head:
1. Head balances to neutral position with camera attached.
2. Head responds immediately and smoothly to pan, tilt and roll
operations.
3. All camera functions are properly driven by remote controls
provided, which may operate focus, iris, zoom, camera motor and
ramping operations. Test that speed, direction and lag adjustments are
accurate.
4. Calibration of all controls is correct or can be adjusted properly. If
not, calibrate as necessary.
5. Gears mesh properly (on certain heads).
6. All required cables and connectors are present and operate properly.
7. Wireless transmitter and receiver (on certain heads) operate
properly.
8. Head operation does not interfere with video tap signal or add video
noise.
i. Dutch head with third axis: Check all controls, drag knobs, locks,
plates and attachments as with two-axis head.
j. When transporting any type of tripod head, release all locks, and
reduce the pan and tilt tension to 0 (no tension).
k. Carry extra utility base plate to adapt to workbench in camera room or
truck.
4. Camera Body
a. Accommodates and locks securely with camera lockdown screw to
tripod head, balance plate, riser, tilt plate and shoulder pad.
b. All rollers in the film path move freely.
c. Camera interior is clean—no emulsion buildup, dust or film chips.
d. Camera oil and grease have been applied to lubrication points as
recommended by camera manufacturer. Clean off any excess. (Frequency
and amount of lubrication vary greatly with cameras. For 35mm, common
practice is to lubricate every 15,000 feet or sooner if squeaking or rubbing
noise is detected. High-speed cameras may require lubrication after each
1,000 feet.)
e. Flange focal depth is set to manufacturer’s specifications. Confirm by
measurement with depth gauge. (See next section.)
f. All fuses are intact and properly seated. Carry spare fuses.
g. Movements of the shutter, mirror, pull-down claw and registration pins
are synchronized. Two tests for shutter sync:
1. Carefully scribe a frame in the gate, then inch the motor back and
forth manually. The film should remain stationary as long as the shutter
stays open.
2. Place a piece of film under the registration pin and inch the motor
movement so that the pin presses against the film. Then inch the motor
movement forward—the shutter should remain closed until the pin
releases the film.
h. The “glow” that illuminates the ground glass is synchronized with the
shutter—the light turns off before the shutter opens the gate. (Check only
on certain cameras).
i. Camera speed holds a crystal speed at all speeds required for the
production. Thoroughly test all external speed-control accessories being
used in the camera package.
j. Shutter speed remains constant. Check by viewing shutter with crystal
strobe gun.
k. External sync control device maintains camera sync with the external
device(s) being used (monitor, computer, projector, or other camera).
l. Pitch and loop adjustments operate properly (certain cameras).
m. Buckle trip (certain cameras) stops camera movement.
n. Camera is quiet (except on MOS cameras). Decibel level is
appropriate to camera used. Listen for abnormal noise while rolling test
film through camera. Test with barney or blimp, if applicable.
o. Heating system operates properly; eyepiece does not fog.
p. All power ports operate and provide appropriate power.
q. Camera has correct configuration for chosen format (such as 3-perf,
Super 35mm, etc.). If the camera was adapted from another format (as from
standard 35mm to Super 35mm), inquire how the camera was modified. On some
Arri cameras, only the faceplate and mount need be rotated; Panavision cameras
require changing the faceplate and lens mount, the gate, and the position of the
movement.
Check the following on any body that has been adapted:
1. Lens covers full frame and aligns to proper center line.
2. Aperture plate or gate.
3. Camera movement.
4. Ground-glass markings—frame lines, center line.
5. Film-counter operation.
6. Zoom-lens tracking—lens coverage and alignment covers full frame
and aligns to proper center line. For alignment, inquire how camera
was adapted—whether by moving lens board (or mount) or adjusting
zoom lens. Confirm by checking tracking.
7. Ground Glass
a. Choice must accommodate the chosen aspect ratio and may include a
combination (as 1.85/TV), and creative preferences of the cinematographer
(as ‘common top’ or shaded areas). Custom ground-glass markings may be
ordered.
b. When seated properly, the viewfinder focuses on the grain or texture of
the glass. There is a ‘ground’ (textured) side and a smooth side. When correctly
installed, the smooth surface faces the operator. (Remember: ‘smooth
operator.’)
c. Viewfinder glow properly displays ground-glass markings. On some
cameras, the glow mask must be aligned to the ground-glass frame lines.
Check brightness adjustment. If available, select desired color of glow.
d. Test accuracy of framing and center lines. These tests insure that the
lens plane, film plane and ground glass are properly installed, marked and
aligned. Place SMPTE framing test film (RP40 for 35mm) in gate, then
view through a 50mm lens or a custom microscope mounted in the lens
port, and turn mirror back and forth to compare the frame lines and
crosshairs on the film with those on the ground glass.
e. Then shoot a film test of an accurate framing chart or rack leader chart.
A useful rack-leader chart is 11″ x 17″; it displays the frame lines with arrows
pointing to the corners, Siemens stars (for focus), and the name of the
production, producer, director and cinematographer. (It can be ordered,
purchased or created.)
f. Project film tests of framing to confirm framing accuracy. Compare test
footage of rack-leader chart with SMPTE test film by superimposing or
bipacking the two. Judge accuracy of framing and centering.
11. Viewfinder
a. Ground glass is properly seated; ground surface faces the lens.
Ground-glass focus is sharp. Check on focus chart at 2′ with a 25mm lens. If
focus is soft, have it checked with portable collimator.
b. The image is clear and clean. If necessary, remove ground glass and
carefully clean with proper solvent and lint-free lens tissue. Then reseat
properly (usually with audible ‘click’).
c. Ground glass is marked for the aspect ratios requested by the director
of photography.
d. Eyepiece focuses easily to the eye of the operator. (Adjust diopter until
the grains of the ground glass appear sharp.) Eyepiece focus for average
vision should fall near the center of travel of the focus adjustment, leaving a
range in either direction.
e. Viewfinder extender fits properly between camera body and eyepiece.
Magnifier and ND filter operate properly.
f. Viewfinder extender leveling rod attaches securely to extender and to
bracket on tripod head. Rod extends smoothly and locks in all positions.
g. Viewfinder illumination, or glow, is synchronized with the shutter
(certain cameras).
h. Eyepiece heater warms to comfortable temperature.
12. Lenses
a. Each lens and lens housing is compatible with—and seats securely in
—the mount in the camera body.
b. Front and rear elements are clear and clean, free of large chips and
scratches, or any fingerprints or dirt. Blow off loose material with a blower
bulb. Clean off grease with lint-free lens cloth or tissue moistened with
proper lens-cleaning fluid.
c. Iris leaves fall properly in place as they are closed from the full open
position. Iris operates smoothly through the full range from wide open to
the smallest aperture.
d. Follow-focus assembly mounts properly. Focus gears align and mesh
properly, easily and snugly to the lens gears. There is no delay, ‘lost motion’
or ‘slop’ when starting or changing direction of adjustment of focus or iris
setting. Check both directions of adjustment. (Particularly important with
standard geared or remote servo operation).
e. Lens focus-distance markings are accurate. (See Lens Focus
Calibration Test below.)
f. Telephoto extenders and wide-angle adapters are adjusted to match and
fit the lens intended for their use. If used with a zoom lens, check zoom
back focus with the extender and/or adapter attached.
g. Specialty lenses: Each lens properly performs the effect for which it
was designed. Includes fisheye, periscope, swing/shift, slant focus,
Lensbaby, and other lenses designed for specific uses.
h. Remote focus and iris control:
1. Gears mesh between lens barrel and motor.
2. Focus and iris adjustment operate smoothly through their full range
of motion.
3. There is no ‘lost motion’ when starting or changing direction of
focus or iris adjustment.
4. Remote control maintains accurate calibration with lens barrel
markings.
5. All cables operate properly. Carry set of backup cables.
6. ‘Smart’ lenses, such as the Arri LDS system and the Cooke/i system
(send aperture, zoom and focus data to an onboard display and external
devices and recorders): Confirm that the system precisely calibrates all
devices to which it is connected and preserves all generated metadata.
16. Filters
a. Both surfaces of each filter are clear, clean and free of major flaws.
b. Filters are the proper size for lenses in package:
1. Filters cover entire image area of each lens being used without
vignetting.
2. Filters cover entire front element of lens and allow the use of the
full aperture of the lens.
3. Filters fit properly into filter holders on lens, lens housing, matte
box, filter tray or separate holder.
c. Filter-mounting accessories accommodate all lenses used, and mount
the number of filters on each lens required by director of photography
without vignetting.
d. The rotating filter stage used for polarizing filters turns smoothly and
locks in any position.
e. Sliding mount for graduated filters moves smoothly and locks in any
position.
f. Prepare labels for each filter (tape or Velcro) for display on the side of
the matte box.
g. Use a circular polarizer for cameras in which polarization can interfere
with or darken the viewing system or videotap. Ask the technician and test
viewing system with filter.
18. Magazine
a. Fits snugly into the camera body.
b. Magazine doors fit and lock securely.
c. On coaxial magazines, label each ‘Feed’ and ‘Take-up’ door with tape.
d. Throat, film channels and interior are clean, clear of dust and film
chips.
e. Loop adjustment operates properly (certain cameras).
f. Magazine gear timing is properly adjusted—film runs smoothly and
quietly through the magazine.
g. Clutch tension and friction-brake tension have been measured with the
proper tools and are correct.
h. Inspect magazine port seals (top and rear).
i. Magazine operation is quiet (except on MOS cameras). Use a barney to
dampen sound if necessary.
21. Ramping
a. Frame-rate variation affects exposure. Ramping therefore requires
exposure compensation. Determine whether exposure compensation will be
accomplished by adjusting aperture, shutter angle or a combination of the
two.
b. Confirm that the camera and ramping accessories provide the required
functions. Fully test all ramping operations planned for the production.
c. Shoot film to insure consistent exposure density during ramping. (See
Film Tests section; a worked exposure-time sketch follows this item.)
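The compensation follows from the per-frame exposure time, t = (shutter angle ÷ 360) ÷ fps; a sketch of the stop change across an assumed ramp:

```python
import math

def exposure_time(fps, shutter_deg):
    """Per-frame exposure time in seconds."""
    return (shutter_deg / 360.0) / fps

def stop_change(fps_a, fps_b, shutter_deg=180.0):
    """Stops gained (+) or lost (-) when ramping from fps_a to fps_b
    at a fixed shutter angle."""
    return math.log2(exposure_time(fps_b, shutter_deg) /
                     exposure_time(fps_a, shutter_deg))

# Ramping 24 -> 96 fps at 180 degrees costs two stops, which must be
# returned by opening the iris, widening the shutter, or both:
print(stop_change(24, 96))  # -2.0
```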
22. Accessories
a. Each accessory properly performs the function for which it was
designed. All cables, power supplies and auxiliary support equipment are
present and operational. Test and operate each device.
b. External speed control holds a crystal speed at all speeds required for
the production.
c. External sync control maintains sync, or is properly driven by external
device (monitor, computer, projector or another camera).
d. Rain gear protects camera from moisture.
e. Underwater housing seals properly and allows operation of all camera
functions.
f. Camera heater takes proper period of time to warm up camera.
LENS FOCUS CALIBRATION
Methods to evaluate lens focus calibration:
1. Use camera to view charts directly through the lens. Use test procedure
described below to evaluate lenses by checking focus through a range of
distance.
2. Shoot film test of chart using suggested procedure, then process and project
film.
For more precise evaluation, when the equipment is available, the following
two methods are recommended (camera rental facilities should have
performed these tests on all their lenses):
3. Mount lens on projector and project reticle (transparency).
4. Examine lens on MTF machine.
TEST PROCEDURE
1. Prime Lenses
a. 40mm or wider: Set camera lens at 3′, 5′ and 7′ from focus chart. At
each distance, focus the lens by eye through the viewfinder and compare with
the lens-barrel distance markings. For more critical testing, shoot
film tests of each lens.
b. Longer than 40mm: Set camera lens at 7′, 10′ and 15′ from focus chart.
Focus lens visually and compare with lens distance markings.
c. All lenses: Focus on distant object to test sharpness at infinity.
2. Zoom Lenses: Use calibration procedure (described for Prime Lenses) at
minimum focus distance, 7′, 12′, and a distant object to test infinity. Test for
several focal lengths, including full wide and full telephoto.
3. Note: Other lens-to-chart distances may be used, as long as the selected
distance is engraved on the lens barrel. The chart should fill the frame as
much as possible.
4. When the eye focus differs from the measured distance:
a. If consistent from lens to lens:
1. Check ground-glass seating. Reseat if necessary.
2. Check lens mount.
3. Check distance measurement technique and measuring device for
accuracy.
4. Have flange focal depth and ground-glass collimation checked.
b. Single discrepancy:
1. Return lens for collimation.
2. If needed immediately, encircle lens barrel with chart tape, focus by
eye, and mark the correct distances.
SCRATCH TEST
Run a scratch test for the camera and each magazine to determine whether
there are any obstructions in the camera or magazine mechanism that might
damage the film. Load a short end of virgin raw stock in the magazine and
thread it through the camera. Turn on the camera motor and run the film through
for several seconds. Turn off the motor. Remove the film from the take-up
compartment of the magazine without unthreading the film from the camera.
Examine the film with a bright light and magnifying glass. If any scratches or oil
spots appear on the emulsion or base, mark the film (still threaded in the camera
body) with a felt pen at the following points:
1. where it exits the magazine feed rollers
2. just before it enters the gate
3. just after it exits the gate
4. where it enters the magazine take-up rollers
Then carefully unthread the film and examine it to determine where the
damage originates. Once the problem area has been identified, check that area
for dust, film chips, emulsion buildup or burrs. Remove smooth burrs with
emery paper, and remove obstructions with an orangewood stick.
Make periodic scratch tests on magazines and camera during production to
avoid damage to the negative.
STEADINESS TEST
Test steadiness of camera movement by double-exposing a test image.
Test at each speed required by the production.
1. Select a target pattern to photograph:
a. A simple grid or cross of narrow white tape on a black card.
b. A target device, designed for this test. It mounts securely on a wall and
can be rotated or shifted in place to create the desired offset.
c. A ‘steady tester’ which mounts in the lens mount and provides an
illuminated target locked to the camera body. A rotation of the unit creates
the required offset.
2. Mark start frame in film gate with pen or paper punch. (Only necessary with
cameras that do not run in reverse.)
3. Roll at least 30 seconds of the chart at 50% exposure (one stop under, so
that the two passes sum to a normal exposure).
4. Cap lens and rewind film in camera, or rewind film manually in dark room,
and position the marked ‘start’ frame back in the film gate (so as to thread on
the same perforation).
5. Offset chart by the width of the tape (a), rotate the dedicated target (b) or
rotate the steady tester, and proceed to roll for another 30 seconds at 50%
exposure (double-exposing the target).
6. Process and project the film to evaluate steadiness.
7. It is essential that both the camera and the chart target be rock-steady during
exposure of the test.
DAILY PREPARATION FOR SHOOTING
1. Clean the aperture. Suggested methods:
a. Pull aperture plate and pressure pad.
b. Clean both with chamois, and if necessary, proper solvent.
c. Remove hairs and dust from gate, channels and holes with an
orangewood stick.
d. Remove gels from filter holders and slots.
e. Sight through lens to check gate. (Possible only with 40mm or longer.)
2. Clean dust and chips from film chamber. Avoid blowing material back into
gate or into camera movement.
3. Warm up the camera:
a. Run the camera for several minutes without film.
b. In cold situations, run the camera for the amount of time it would take
to run one full magazine through the camera at standard speed.
c. Panavision offers a heater accessory that will warm up the camera in a
few minutes.
4. Load correct film stock in magazine and label magazine with tape.
5. Prepare slate and camera reports.
6. Record and communicate instructions for telecine transfer regarding both
format and ‘look.’
FILM TESTS
(See pages 289–304).
Film tests are requested by the director of photography. Following is a list of
tests that may be useful in preparation for a production. A standard gray scale
and color chip chart are often used for such tests, as well as models that resemble
the actors in the film to be photographed.
1. Lens sharpness and color: (This is particularly important if older lenses or
lenses of different manufacturers are used on the same production.) Test each
lens to insure consistent sharpness and color from lens to lens. Photograph
the identical subject with each lens and compare on a one-light print.
2. Film stock and emulsion batch: Test each different film stock and emulsion
batch to be used on the production for color balance, actual exposure index
and exposure latitude.
3. Laboratory processing: normal, forced, flashed, bleach-bypass, etc. Test
processing at same film laboratory selected to be used during the production.
This is particularly important for determining the degree of forced
processing, flashing or bleach-bypass effect that is desired.
4. Flashing in-camera: Evaluate levels of flashing for desired effect.
5. Filters: Test the effects of various filters on chosen representative subjects to
facilitate the selection of filter types and grades for the production. For
proper evaluation, use lenses and exposure values anticipated for the
particular effect.
6. Lighting: Test the look of new lighting instruments, color gels and diffusion
materials on selected subjects.
7. Makeup: Test makeup on actors under the lighting conditions planned for the
production.
8. Time-code sync: Test sync with sound and any other cameras planned to be
used on the production. Shoot a film test, have the production company
process the picture and sound, and send the footage through the entire
planned postproduction workflow. Screen the
result. This will insure that the time code used in production is compatible
with all procedures, equipment and facilities used in post.
9. External sync box: Test sync with external device driving camera (monitor,
computer, projector or other camera).
10. Ramping: Test ramping precision by shooting a solid even field (such as a
cloudless sky or full-frame gray card) and performing all ramping
operations. View test to evaluate consistency of film density—any shifts
indicate exposure variation.
11. Framing: Shoot framing chart or rack-leader chart. Project test to evaluate
framing accuracy.
TOOLS
A proper set of tools and supplies is essential to the preparation and
maintenance of motion-picture equipment. Although the production company
should provide the expendable supplies, a camera assistant’s personal set of tools
should include most of the following items:
Useful fluids:
Panchro Lens Fluid – cleans lenses
Denatured alcohol – cleans film path
Acetone – cleans metal parts, lenses (does not streak or leave residue), but
damages plastic and paint
Naphtha or lighter fluid – removes adhesive residue
Camera oil – acquire from respective camera manufacturer
Standard Tools:
blower bulb – large (6″)
lens brush – camel’s hair or soft sable (1″) (use only for lenses, keep capped)
magazine brush – stiff bristles (1″-2″)
microfiber lens cloth
lens tissue – lint free
cotton swabs
lens-cleaning solvent
50′ flexible cloth measuring tape
lighter fluid
scissors – straight blade, blunt tip (2″)
tweezers/forceps – curved dissecting forceps or hemostat
ground glass puller
Arri SW2 – 2mm hex (for variable shutters)
magnifying glass
small flashlight
orangewood sticks
tape – cloth (1″) black, white and colors; paper (1⁄2″) white and colors; chart
(1⁄16″) white – for lens barrel markings; Velcro (1″) white, male and female
chalk – thick, dustless
felt marking pens
‘write-on/wipe-off’ pens – for dry-erase plastic slates
powder puffs – to clean rub-off slates
grease pencils – black and white
pens and pencils
film cores
camera fuses
multimeter
soldering iron
16-gauge solder
solder wick – de-soldering spool
folding knife
emery paper – 600 grit – ferric-oxide coated
razor blades – single-edge industrial
rope – nylon line (1⁄8″, 10′ long)
camera oil – per manufacturer
“camera grease”
oil syringe and needle – one fine, one wide
bubble level – small, circular
ATG-924 (snot tape)
black cloth – 2′ square
set of jewelers screwdrivers
set of hex wrenches (1⁄32″–3⁄16″ and metric)
combination pliers (6″)
needlenose pliers (6″), miniature (1″)
crescent wrench (6″)
vice-grip pliers (4″)
diagonal cutters (4″)
wire strippers (4″)
screwdrivers (1⁄8″, 3⁄16″, 1⁄4″, 5⁄16″)
Phillips screwdrivers (#0, #1, #2)
Arri screwdrivers (#1, #2, #3)
Optional Items
Additional tools are often useful—each assistant collects his or her own
personal set. Following is a list of optional items that many have found to be
valuable:
insert slate
color lily (gray scale and color chip chart)
gray card
electronic range finder
angle finder
electronic tablet or PDA with camera-assistant software
electrical adapters
U-ground plug adapter
screw-in socket adapter
WD-40 oil
assistant light
compass
depth-of-field charts
depth-of-field calculator
footage calculator
circle template – for cutting gels
extra power cables
magnetic screwdriver
variable-width screwdriver
wooden wedges – to level camera
small mirror – to create a highlight
dentist’s mirror – aid in cleaning
alligator clips
graphite lubricant
3⁄8″-16 bolts – short and long
2 one-inch C-clamps
black automotive weather stripping
small wooden plank – for mounting camera
THE CAMERA ASSISTANT
The position of camera assistant requires a person with a wide range of skills.
The assistant must have technical knowledge of the camera, lenses and a myriad
of support equipment. Production conditions have become more demanding,
with tighter schedules, less rehearsal, faster lenses and shallower depth of field.
He or she must be physically fit, capable of total concentration and able to retain
a sense of humor under stressful conditions.
A camera operator for 24 years, Tom Fraser is an active member of the Society
of Camera Operators and a past member of their Board.
Preparation of Digital Camera Equipment
by Marty Ollstein
Factors to Consider
The first and most universal factor to consider is the primary distribution
venue intended for the production at the end of the process. If the production
will be released theatrically and projected on large screens, certain standards and
requirements should be met. The resolution of the image will be an important
factor. However, if the release will be limited to a television broadcast, DVD or
Blu-ray, or online distribution, the requirements are very different.
The postproduction workflow must also be considered. Each post facility has
certain capabilities and preferences. Some cater to higher-resolution data
workflows designed for digital cinema; others work with HD video workflows.
This choice of workflow will influence camera, accessory and format selection. If
the post workflow limits the production image to HD resolution, there may be no
need to consider higher-resolution cameras or formats.
The physical parameters of a production have a natural effect on the operation
of the camera department. Digital-camera studio rigs can be as imposing and
heavy as studio film-camera rigs, yet other camera configurations can fit in the
palm of your hand. Will the production be shot on studio stages or remote
locations? If on location, is there ground floor/elevator roll-in access, or must the
equipment be brought manually up narrow stairways? Will the set be well
secured, or will the shooting be done on uncontrolled, crowded city streets? All
of these issues inform the choices of camera, equipment and shooting style.
Of course, the creative look of the production should be a significant factor
influencing all camera decisions. The look is decided by the cinematographer, in
support of the director’s vision, and in collaboration with the other creative
department heads, including the production designer and visual-effects
supervisor.
Last but not least, budget affects everything on a production, and camera is no
exception. If the budget is modest but the director’s vision is ambitious, the
cinematographer must find a way to get the look the director wants without the
equipment it might normally require. If the cinematographer understands the
concepts and devices involved—what they do and how they do it—he or she will
be better equipped to devise shortcuts, cut corners and improvise solutions
without sacrificing quality or substance.
Look-Management Prep
The consistent use of a comprehensive look-management system can keep
the production process organized and insure that the look of the resulting image
preserves the creative intent of the cinematographer and director. Defining the
intended look is the first step, and a good strategy can be to create a series of
“hero” frames based on scouting footage, screen tests of actors, or photography
shot specifically for this purpose. Once converted into a format compatible with
the production’s look-management system, the gathered images can be
manipulated with the image-processing software to achieve the desired look; the
resulting hero frames are then saved (along with the software recipe used to
create them) in a format that can be shared with predictable quality. Calibrated
displays and proper viewing environments are necessary for this process to
work.
The choice of camera system may determine many issues associated with the
look, but many other choices may remain, including the recording medium (tape,
hard drive or solid state), color space and format, and recording bit depth and
sampling ratio. Choices of aspect ratio, frame rate and shutter angle must also
still be made. Each decision narrows the selection of available equipment, and
provides the crew with the information needed to do a thorough preparation.
B. CHECKOUT
Inventory - Master Production Equipment List
Once the decisions have all been made, the cinematographer develops the
master list of equipment with the camera crew. Part of this list mirrors a film-
shoot list, including items such as camera support (tripod, head, etc.), lenses, and
many of the accessories and expendables. This makes sense, since some digital
cameras (such as the Arri Alexa) are designed to be identical to a film camera on
the front end—up to the sensor.
A generic digital camera inventory checklist:
1. Camera system: Camera body, videotape “magazines,” onboard disk drives,
onboard solid-state drives, all types of cables, EVF (electronic viewfinder),
optical viewfinder (when available).
2. Camera support: Hi-hat, spreader, tripods, tripod heads, adapter plates,
handheld rig.
3. Lenses: Lenses that cover (expose) the full area of the camera’s sensor.
4. Camera accessories: Matte boxes, follow-focus unit, “smart-lens”
accessories, zoom motor, rods and adapters, filters, all remote controls.
5. Power: A/C adapter, batteries, power cables.
6. Digital Support: Monitors (onboard, handheld, viewing, reference,
wireless), recorders (digital videotape, disk drive or solid state), backup
systems (appropriate to medium), look-management tools (LUT box, color-
correction console, workstation and software).
7. Scopes: Waveform, vectorscope, histogram, combination scopes.
8. Media: Enough digital tape cassettes of correct format (as HDCAM-SR),
onboard hard-disk drives, onboard solid-state drives, flash memory cards or
sticks.
9. Expendables: Supplies for assistant (varieties of adhesive tape, markers,
camera reports, USB flash drives, etc.).
10. Tools: Portable collimator, Sharp Max, multimeter, monitor probe, light
meter.
11. Test charts: Color chip chart (digital), gray scale, framing, resolution.
12. Balls: White ball with black hole (the “Stump” meter), set of balls for
modeling lighting (mirrored, gray, white).
In-House Preparation
The staff at a rental house maintains, services, repairs and tests all equipment
that passes through the facility. When a package is prepared for checkout, the
items on the production’s order are assembled, and final adjustments and settings
are made before presenting the equipment to the client. All equipment delivered
to a checkout bay should be in working order and perform according to technical
specifications.
In the course of servicing equipment to prepare it for production, many
measurements and tests are made by the engineers and prep techs. Some rental
houses log the results of these tests and make them available to their clients.
Otherwise, the camera crew may ask the prep tech about any measurement,
setting, test or modification to confirm that it has been done, and if it has, request
the exact results.
If a test needs to be made or repeated, the crew may ask the prep tech to have
the rental house do it, or to assist them in performing the test in the prep area.
Some diagnostic and preparation procedures require specialized test equipment.
Although many of the preparation steps listed below may have been
performed by a rental-house technician, it is good practice for the crew to cover
each step themselves, so as to thoroughly check and test the camera firsthand.
Crew Preparation
There are many aspects involved in a thorough prep of a digital camera
system. This guide organizes the procedures into the following categories:
1. Universal Prep Procedures
2. Digital Camera Prep Steps
3. Setting up the Menu
a. Format Settings
b. Image and Color Settings—Scene Files and Looks
A. Format Settings
As discussed above, many choices must be made prior to equipment checkout.
Based on the factors discussed, including distribution venue, post workflow,
creative look, budget and physical conditions of the shooting, the key decisions
are made. These decisions include the selection of the camera system and all
other key choices that define how the camera will record the image.
Some cameras record in only one mode (one format). Others offer choices of
modes, formats and settings. When there are choices, the camera menus must be
set up to record in the chosen format. Setting up the menus is a fundamental part
of the preparation for production.
Step through the menus, make the selections and adjust the settings as ordered
by the cinematographer. The menus and options vary widely by camera system.
The following list offers a brief description of each parameter that may need to
be set.
1–3. Select the color space, bit depth and sampling ratio. (A short arithmetic
sketch follows this list.)
1. Color space (RGB, YUV or YCrCb):
a. RGB provides full three-channel (red, green and blue) color
information, comparable to three-layer emulsion film.
b. YUV or YCrCb separates luminance (Y = light intensity or
brightness) from chrominance (U and V = color information). This
separation allows for color subsampling, in which luminance can be
measured at a different frequency (more often) than chrominance. A color
space is often associated with a particular bit-depth and sampling ratio.
2. Bit depth (8-bit, 10-bit, 12-bit): 10-bit depth provides more code values than
8-bit depth to use in representing light values. The 10-bit scale is 0–1023; the
8-bit scale is 0–255. The difference shows in tone subtlety. Higher bit depth
comes at a cost—the additional tape, drive space or digital memory required
to record and store the bits, and time in post to process them.
3. Sampling ratio (4:4:4, 4:2:2): The ratio of the frequency of the sampling
(measurement) of luminance (the first integer) to chrominance (the second
two integers) on odd and even scan lines. “4” represents full bandwidth—the
maximum information that can be sampled.
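The arithmetic behind items 2 and 3 can be checked directly; a simple illustrative sketch:

```python
# Code values available at each bit depth, and the tonal step size
# as a fraction of full scale (smaller steps = subtler gradation).
for bits in (8, 10, 12):
    n = 2 ** bits
    print(f"{bits}-bit: 0-{n - 1} ({n} values, step {1 / (n - 1):.5f})")
# 10-bit provides four times the tonal resolution of 8-bit, at a
# proportional cost in storage and processing.
```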
Time code: Record run, continuous run, time of day, jam sync, genlock
There are several different strategies for recording time code on a digital
camera. It is important to consult the post house and sound mixer to insure that
the method selected is compatible with all systems that will be used on the
production. To fully test the time-code system selected and confirm
synchronization between camera and sound, the sound mixer should participate
in a test using all sound and time-code hardware that will be used for the
production. This may include a sound recorder, a Clockit-type time-code
generator and a smart slate that displays time code.
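Underneath any of these strategies, time code is simply a running frame count; a minimal sketch of the conversion at an assumed 24 fps (drop-frame rates complicate the arithmetic in practice):

```python
def frames_to_timecode(frame_count, fps=24):
    """Convert an absolute frame count to HH:MM:SS:FF (non-drop-frame)."""
    ff = frame_count % fps
    total_seconds = frame_count // fps
    return (f"{total_seconds // 3600:02d}:{(total_seconds // 60) % 60:02d}:"
            f"{total_seconds % 60:02d}:{ff:02d}")

print(frames_to_timecode(90000))  # 01:02:30:00 at 24 fps
```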
Useful targets:
Color-chip chart designed for digital video.
Focus chart with Siemens star, resolution line pairs.
Rack-framing chart.
Middle-gray card.
Grayscale steps.
Human models that resemble principal cast.
Useful tools:
Waveform monitor.
Vectorscope.
Histogram.
Light meter.
Test balls/spheres (middle gray, white, mirror, white with black hole).
1. Workflow
The most important test to perform in preproduction is a practical image test
of the entire workflow—end to end, “scene to screen.”
a. Record images representative of the planned production
photography. Include charts (color, focus, framing), models, motion, and
lighting setups approximating the dynamic range anticipated.
b. Use the same equipment (camera, lenses) and media ordered for
production.
c. Set up the camera and all supporting devices exactly as they will be
set for production. If multiple configurations or settings are anticipated,
shoot takes with each variation, clearly slating each one. This may
include frame rates, scene files, recording configurations and any
alternate looks achieved by changing menu settings.
d. Record time code and audio as planned for production.
e. Send the recorded material through the same workflow planned for
the production. This may include transfers, conversions, processing by
different software applications, and recording out to the final release
medium. If the project is destined for a theatrical film release, this will
require recording the material back to film.
This test requires the full cooperation of the post facility. Most will
welcome the opportunity to test and confirm a production’s workflow
before production begins. Just as the camera settings must be the same as
planned for production, all settings, selections, devices and media used in
the postproduction process must be the same as planned for the
production.
f. If the result of the test is unsatisfactory, the cause must be identified
before proceeding to production.
2. Exposure/dynamic range
Just as a cinematographer tests a new film stock—or even a new batch
number—for dynamic range, he or she should test a digital camera for its
response to light. The result of this test can inform the cinematographer’s
lighting style. Knowing what happens to the image at certain levels of under- or
overexposure allows the cinematographer to take full advantage of the camera’s
capabilities and push them to their limit.
a. Use charts and a human model (the eye is most sensitive to exposure
changes on a person’s face).
b. Shoot identical takes at 1-stop intervals, up to 5 stops under and 5
stops over the “normal” (middle gray) exposure setting. Add 1⁄2-stop
intervals within 2 stops of normal, for more critical evaluation.
c. Clearly slate and log each take. (The exposure-factor series is
sketched below.)
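Each stop doubles or halves the light, so the wedge described above spans a 1,024:1 range; a sketch generating the exposure factors (the lighting or lens adjustments that produce them are up to the crew):

```python
def exposure_series():
    """Whole stops from -5 to +5, plus half stops within +/-2 stops."""
    stops = {s / 2 for s in range(-4, 5)}  # half-stop steps near normal
    stops.update(range(-5, 6))             # whole stops, +/-5
    return sorted(stops)

for stop in exposure_series():
    print(f"{stop:+.1f} stop -> {2 ** stop:.3f}x normal exposure")
```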
3. Framing
Shoot a rack-leader framing chart, with arrows pointing out to each corner of
the frame. Align the eyepiece and ground-glass frame lines with the frame
lines on the framing chart.
4. Time code
Record time code with all the same hardware planned for production—smart
slate, Clockit, any other time-code generator, sound recorder (when
available), and additional cameras. Confirm time code records properly, and
all picture and sound stays in sync.
5. Frame rates
Record takes at each frame rate anticipated for production. For off-standard
frame rates (slow or fast motion), send the footage through post processing,
then view to evaluate the motion effects.
6. Look management
a. Load each of the shoot’s planned looks into the system.
b. Record an appropriate scene for each of the respective looks.
c. Apply each look to the particular scene recorded for its use.
d. Evaluate the transformed scene on a calibrated monitor.
e. Create new scene files, ASC CDL recipes and LUTs as requested,
using camera menus, image-processing software and LUT boxes,
respectively. (A minimal CDL sketch follows this list.)
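An ASC CDL recipe is ten numbers: slope, offset and power per channel, plus overall saturation. The standardized per-channel transfer function is out = (in × slope + offset)^power; a minimal sketch, omitting the saturation step:

```python
def cdl_channel(value, slope, offset, power):
    """ASC CDL per-channel transform on a normalized 0.0-1.0 value.
    Negative intermediate results are clamped before the power step
    (handling of negatives varies slightly between implementations)."""
    v = max(0.0, value * slope + offset)
    return v ** power

# Assumed example values for one channel: mild gain, lift and gamma.
print(round(cdl_channel(0.5, slope=1.1, offset=0.02, power=0.9), 3))  # 0.603
```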
7. Traditional Visual Tests
As with a film camera prep, shoot tests of all elements that require visual
evaluation. (See “Film Camera Prep,” page 441.)
a. All lenses and filters
b. Specialized lighting effects.
c. New lighting color gels and diffusion.
d. Makeup, sets and wardrobe as requested by production.
D. DAILY PREP AND MAINTENANCE
1. Set up camera package on the set for the first shot.
2. Power and warm-up camera and all hardware to proper operating
temperature. If available and needed, use camera body and eyepiece heaters.
3. Examine sensor and clean as necessary.
(See “Sensor Cleaning Steps” section, page 451)
4. Clean all lenses and filters as necessary.
5. Check for any lit or dead pixels.
(See “Pixel Check Steps” section, page 450)
6. Run auto black balance (ABB) enough times to process all pixels on sensor.
7. Check for adequate supply of media (tapes, drives, cards) for the day’s
shooting. Prepare backup devices and media for use.
8. Check calibration on all monitors and adjust as necessary using field
calibration procedures. (See below.)
The calibration of a CRT is affected by the earth’s magnetic field—it must be
checked each time it is moved to a different location.
2. Using Focus chart (with Siemens star) or Pituro chart and tape measure
a. Set camera at such a distance that the chart fills (or nearly fills) the
frame and the lens being checked has a witness mark for that distance.
c. If using a zoom lens, zoom in to maximum focal length.
d. Adjust eyepiece diopter.
e. Raise peaking circuit in viewfinder.
f. Set light intensity on chart for optimum viewing.
g. Adjust back focus with back-focus ring on lens (using steps j, k, l, m
above).
E. DELIVERY OF ELEMENTS TO POST: DAILIES,
EDITORIAL, VFX AND DI
1. After consultation with production and postproduction personnel, decide
which elements will be delivered to which location at the end of each
production day.
2. Plan and schedule the daily delivery method, paying particular attention to
the integrity and safety of the image media (avoiding unnecessary heat and
movement).
3. Establish a clear labeling method, recognized by the postproduction
personnel, that organizes the media and makes requested items easy to find.
Have a happy prep!
A feature cinematographer and director, Marty Ollstein conceived and
developed Crystal Image software, which precisely emulates optical camera
filter effects. He is a member of the ASC Technology Committee, a SMPTE
Fellow, and participated as a cinematographer in the ASC-PGA Camera
Assessment Series (CAS) and Image Control Assessment Series (ICAS).
* Some differences from film camera prep.
Camera-Support Systems
by Andy Romanoff, ASC Associate Member; Frank Kay, ASC Associate Member;
and Kent H. Jorgensen
Camera Section
Aaton 35
Models: 35-3P (newest, optimized for quiet operation in 3-perf.); 35-III, 35-II
(original)
Weight: 16 lbs/7 kg with 400′ (122m) load and 12V DC onboard battery.
Movement: Single pull-down claw which is also the registration pin (steady to
1⁄2000th of image height). Spring-loaded side pressure guides. Adjustable
pitch. 3- or 4-perf.
FPS: 35-3P: Sync speeds: 24, 25, 29.97, 30 fps. Built-in variable crystal
control from 2 to 40 fps in 0.001-fps increments.
35-III: 3–32 fps crystal-controlled adjustable in .001 fps increments via
mini-jog wheel. 24, 25, 29.97, 30 fps sync speeds. Internal phase shift
control for TV-bar elimination.
35-II: 24, 25, and 29.97 or 30 fps. Variable speeds 6–32 fps. Maximum
speed with external speed control is 32 fps, with 180° shutter only.
Aperture: .732″ x .980″ (18.59mm x 24.89mm).
dB: 4-perf 30dB; 3-perf 30dB; 26dB with barney. 35-3P: 3-perf 26dB;
4-perf 30/33dB.
Aaton Magazine Diagrams
Figure 1a & 1b. Top: Position of film before exposure. Form a 24–25-hole
loop by placing a 2″ core between the front of the magazine and the film
while lacing, and make the top and bottom loops equal. Bottom: Position of
film after exposure. Film takes up emulsion side in.
Displays: LCD Display, speed selection, remaining footage, ISO selection,
battery voltage, time and date, full AatonCode readout via a single rotating
jog wheel. Warning for speed discrepancy, misloading and low battery.
Camera shutoff is automatic at end of roll.
Lens: Interchangeable Lens Mounts: Arri PL, Panavision, Nikon. User
adjustable for Standard or Super 35.
Shutter: Mechanically adjustable mirror using shutter tool: 180°; 172.8°; 150°;
144°.
Viewfinder: Reflex Viewfinder. Eyepiece heater. Optional anamorphic viewing
system.
Video: Integrated CCD Color Video Assist: NTSC or PAL; flicker-free at all
camera speeds. Also black-and-white model with manual iris. Film camera
time code, footage, on/off camera status are inserted in both windows and
VITC lines. Built-in frameline generator.
Viewing Screen: Over 16 stock groundglasses; custom markings available.
Aatonlite illuminated markings.
Mags: 400′ (122m) active displacement mag (core spindles shift left to right)
with LCD counter in feet or meters. Mag attaches by clipping onto rear of
camera. Uses 2 plastic film cores.
Accessories: 15mm screw-in front rods below lens. 15mm and 19mm
bridgeplate compatible. Chrosziel and Arri 4 x 5 matteboxes and follow
focus.
Connections: Inputs: Amph9 (video sync), Lemo6 (power zoom), Lemo8
(phase controllers), Lemo5 (SMPTE and ASCII time code). Time recording
with AatonCode II: Man-readable figures and SMPTE time code embedded
in rugged dot matrices. 1⁄2 frame accuracy over 8 hours. Compatible with
film-video synchronizer and precision speed control.
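As a rough check on that figure (assuming 24 fps operation; the arithmetic, not the spec, is mine), a half-frame error over eight hours corresponds to timing stability of well under one part per million:

$$\frac{0.5\ \text{frame}}{8 \times 3600\ \text{s} \times 24\ \text{fps}} = \frac{0.5}{691{,}200} \approx 0.7\ \text{ppm}$$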
Power: 12V DC (operates from 10–15V).
Motor: Brushless; draws 1.4A with film at 25°C (77°F). Batteries: Onboard 3.0 Ah NiMH and 2.5 Ah NiCd.
ARRICAM Lite
Weight: Body: 8.8 lbs/4 kg. Body + Finder: 11.7 lbs/5.3 kg. Body + Finder +
400′ (122m) Shoulder Mag: 17.5 lbs/7.95 kg.
Movement: Dual-pin registration and dual pull-down claws, 4- or 3-perf, low-maintenance 5-link movement with pitch adjustment for optimizing camera quietness.
FPS: Forward 1–40 fps. Reverse 1–32 fps. All speeds are crystal-controlled and can be set with 1⁄1000 fps precision.
Aperture: Full frame with exchangeable format masks in gate. Gel holder in
gate, very close to film plane. Aperture plate and spacer plate removable for
cleaning.
dB: Less than 24dB(A). 3-Perf slightly noisier than 4.
Displays: (On camera left side) main display with adjustable brightness red
LEDs for fps, shutter angle, footage exposed, or remaining raw stock.
run/not ready LED. Extra display camera left and right (studio readout) with
studio viewfinders.
Push-button controls for setting fps, shutter angle, display brightness,
electronic inching, phase, footage counter reset.
Lens: 54mm stainless steel PL mount, switchable for standard or Super 35, with
two sets of lens data system (LDS) contacts. LDS, when used with lens data
box, provides lens data readout as text on video assist or shown on dedicated
lens data display. Also simplifies ramps and wired and wireless lens control.
Shutter: 180° mirror shutter, electronically adjustable from 0°–180° in 0.1°
increments. Closes fully (0°) for in-camera slating. Tiny motor in shaft
controls shutter opening. Ramp range is 11.2°–180°.
Arriflex 35-2C
Models: Over 17,000 were made, in various iterations; many still in use with numerous modifications—especially conversions from the original three-lens turret to PL mount and updated motors. The 35-2C/B has a three-lens turret and an interchangeable motor drive system.
Weight: 5.3 lbs/2.5 kg (body only, no motor, PL mount)
12 lbs./5.5 kg (camera w/200′ (61m) mag, without film and lens).
Movement: Single pull-down claw with extended dwell time to ensure accurate
film positioning during exposure. Academy aperture is standard, with other
formats available.
FPS: The most widely used motor is the Cinematography Electronics Crystal
Base: 1–80 fps in 1-frame increments via push-buttons (1–36 fps with one 12V battery; 1–80 fps with two 12V batteries). It puts the camera at standard lens height for rods.
With ARRI handgrip motors: 20–80 fps with ARRI 32V DC high-speed handgrip motor (over 60 fps may be unsteady or may jam); 24/25 fps with 16V DC governor motor; 20–64 fps with 24–28V DC variable-speed motor; 8–32 fps with 16V DC variable-speed motor. Arri sync motors (120V, 240V) for blimps (120S, 1000) (50/60 Hz).
Aperture: .866″ x .630″ (22mm x 16mm).
Displays: Dial tachometer on camera shows fps; footage indicated by analog
gauge on magazines.
Lens: Originally made with three-lens turret with three Arri standard mounts
(squeeze the tabs, push lens straight in). Later followed by turret with two
standard mounts and one Arri bayonet mount (insert and twist). Hard-front
PL mount modifications widely available.
Arriflex 35-3C
Notes: A single PL-mount evolution of the 2C design. About two dozen made.
Weight: 5.8 lbs/2.6 kg (body only); 11.7 lbs/5.3 kg (body, handgrip, 200′ (61m) mag; no lens or film); 13.5 lbs/6.1 kg (body, handgrip, 400′ (122m) mag, no lens or film).
Movement: Same as 2C: single pin pulldown claw with extended dwell time to
ensure accurate film positioning during exposure.
FPS: 24/25 fps crystal; 5–50 fps variable.
Aperture: .862″ x .630″ (22mm x 16mm).
Lens: 54mm diameter PL mount. Arri bayonet and standard mount lenses (41mm diameter) can be used with a PL adapter (flange focal distance of 52mm stays the same). All zoom and telephoto lenses should be used with a special 3C bridge plate support system.
Shutter: Like the 2C, spinning reflex mirror shutter, adjustable from 0°–165° in
15-degree increments while camera is stopped. Exposure is 1⁄52nd of a
second at 24 fps with a 165-degree shutter.
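For any spinning-mirror camera, exposure time follows from shutter angle and frame rate; as a check on the figure above (the worked arithmetic, not the spec, is added here):

$$t = \frac{\theta}{360^\circ}\cdot\frac{1}{\text{fps}} = \frac{165^\circ}{360^\circ}\cdot\frac{1}{24} \approx \frac{1}{52.4}\ \text{s}$$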
Viewfinder: Three doors: fixed viewfinder, offset for handheld, pivoting finder.
Three choices for fixed viewfinder door: regular, anamorphic and video tap.
6.5x super-wide-angle eyepiece.
Viewing Screen: 2C groundglasses.
Video: On fixed door.
Mags: Uses 2C, 35-3 and 435 mags. Some have collapsible cores. Others use 2″ plastic cores.
Accessories: 2C, 35-3 and 435 accessories.
Power: Power input through a 4-pin connector. Pin 1 is negative; Pin 4 is +12V
DC.
Arriflex/SL 35 Mark II
Notes: Earlier lightweight model, uses 2C groundglasses
Weight: 5.3 lbs/2.4kg without magazine.
Movement: Re-manufactured Arriflex Medical 2C
FPS: 1–80 fps, forward and reverse; extra 50Hz speeds 33.333 and 16.666.
Aperture: .862″ x .630″ (22mm x 16mm). Full aperture. Aperture Plate: Non-
removable.
Power: Quartz-controlled DC motor, 24V; 3 pin #1 Lemo B connector; camera
on/off toggle and remote.
Displays: Digital footage/meters; red LED with reset and memory.
Arriflex 235
Notes: Small, lightweight MOS camera for handheld, rigs, underwater and crash
housings. About half the weight and size of a 435.
Weight: 3.5 kg/7.7 lbs. (body, viewfinder and eyepiece, without magazine)
Movement: Single pull down claw with two prongs; single registration pin.
Registration pin in optical printer position (like 435). Camera available with
3- or 4-perf movements.
Frame Rate: 1–60 fps forward. 23.976, 24, 25, 29.97, 30 fps reverse (quartz accurate to .001 fps).
Aperture: Super 35 (24.9 x 18.7mm; 0.98″ x 0.74″, same as ARRICAM ANSI S35 Silent 1.33 format mask). Fixed-gap film gate.
Display: Operating buttons with an adjustable backlight.
Lens: 54mm PL mount, adjustable for Normal or Super 35. Flange focal depth 51.98mm −0.01mm.
Shutter: Spinning, manually adjustable reflex mirror shutter. Mechanically
adjustable with a 2mm hex driver at: 11.2°, 22.5°, 30°, 45°, 60°, 75°, 90°,
105°, 120°, 135°, 144°, 150°, 172.8° and 180°.
Viewfinder: Reflex viewfinder, can be rotated and extended like the 435.
Automatic or manual image orientation in the viewfinder. Viewfinder and
video assist are independent of each other, so switching to Steadicam or
remote operation is done by simply removing the finder, leaving video assist
on board. No need for a 100% video top. Optional medium finder extender.
Video: IVS color Integrated Video System
Arriflex 535; 535B
Notes: The 535 came first, followed by the lighter 535B. There is no 535A—
just the 535. Main difference is the viewfinder: the 535B is simpler and
lighter. 535 has electronically controlled mirror shutter; 535B shutter is
manual. 535 has 3-position beamsplitter.
Weight: Body only: 21.6 lbs/9.82 kg; body + finder: 29.4 lbs/14.19 kg; body + finder + mag (no film or lens): 36.4 lbs/16.55 kg.
Movement: Dual-pin registration conforming to optical printer standards, and
dual pull-down claws. Can be replaced with a 3-perforation movement.
Adjustable pitch control.
FPS: Quartz controlled 24/25/29.97/30 fps onboard; and
3–50 fps with external control such as Remote Unit (RU) or Variable Speed
Unit (VSU). With external control, makes speed changes while camera is
running, and runs at 24/25 fps in reverse. Pushing the phase button runs
camera at 1 fps—but precise exposure not ensured at 1 fps.
Aperture: Universal aperture plate with interchangeable format masks provides
full range of aspect ratios. Has a behind-the-lens gel filter holder. Gels are
positioned very close to image plane, so they must be scrupulously clean and
free of dust. Gate is easily removed for cleaning.
dB: 19dB.
Displays: In-finder displays use LEDs to allow the operator to monitor various
camera functions, battery status, and programmable film-end warning.
Digital LCD tachometer and footage displays: camera left/right; audible and
visible out-of-sync warning; visible film jam; film end; error codes;
improper movement position; improper magazine mounting; disengaged rear
film guide indicators.
Lens: PL lens mount, 54mm diameter, with relocatable optical center for easy
conversion to Super 35. Flange focal distance is 52mm.
Shutter: Microprocessor-controlled variable mirror shutter (535 only; the B is
manual). Continuously adjustable from 11°–180° while running, in .01°
increments, at any camera speed. The Arriflex 535 permits shutter angle
changes while running at the camera or remotely. The 535’s program also
permits simultaneous frame rate/shutter angle effects, such as programmed
speed changes with precise exposure compensation.
Viewfinder: Swing-over viewfinder enables viewing from either camera left or
camera right, with constant image correction side to side and upright. A
selectable beam splitter provides 80% viewfinder-20% video, 50-50 or video
only. Programmable Arriglow for low-light filming. Nine preprogrammed
illuminated formats, an optional customized format module, and fiber-optic
focus screens. Switchable ND.3 and ND.6 contrast viewing glasses, a variety
of in-finder information LEDs, and a 12″-15″ variable finder.
Viewing Screen: Ground glasses and fiber-optic focus screens for all aspect
ratios.
Video: Video Optics Module (VOM): provides flicker reduction and iris control.
Mags: Rear-mounted 400′ (122m) and 1,000′ (300m) coaxial, each with two
microprocessor-controlled torque motors. Feed and take-up tension and all
other functions are continuously adjusted by microprocessors. Mechanical
and digital LCD footage counters built in.
Accessories: Variable Speed Unit (VSU) can attach to the 535 and permits
camera speed changes between 3 and 50 fps, noncrystal.
Shutter Control Unit (SCU): mounts directly to the camera and permits camera
shutter angle changes between 11° and 180° (535 only).
Remote Unit (RU): operational remotely from up to 60′, provides a VSU/SCU (variable shutter/variable speed) combination. The RU links the SCU and VSU to permit manual adjustment of the frame rate while the 535's microprocessor varies the shutter angle—all to ensure a constant depth of field and exposure (a brief sketch of this compensation follows the accessory list).
SMPTE time code module plugs in to utilize onboard time code generator,
and provides full SMPTE 80-bit time code capability.
Electronic Sync Unit (ESU): Operational remotely from up to 60′; provides
synchronization with an external PAL or NTSC video signal (50/60Hz),
another camera or a projector, or computer or video monitor via a monitor
pick-up. It also contains a phase shifter, Pilotone generator, and selectable
division ratio between an external source and the camera’s frame rate.
Camera Control Unit (CCU): provides integrated control over all electronic
functions. External Sync Unit is designed for multicamera, video or
projector interlock.
Laptop Camera Controller is software to control the camera via a serial
cable.
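A minimal sketch of the compensation the Remote Unit performs, assuming the simple constant-exposure-time relationship; the function name, structure and limits below are illustrative, not ARRI's actual firmware:

```python
# Illustrative sketch: holding exposure constant during a speed ramp.
# Exposure time t = (shutter angle / 360) / fps, so keeping t fixed as
# the frame rate changes requires the angle to scale with the fps.

def compensating_shutter_angle(base_angle, base_fps, new_fps,
                               min_angle=11.0, max_angle=180.0):
    """Shutter angle that keeps exposure time constant when the frame
    rate changes; the limits reflect the 535's 11-180 degree range."""
    angle = base_angle * (new_fps / base_fps)
    if not (min_angle <= angle <= max_angle):
        raise ValueError("ramp exceeds the available shutter-angle range")
    return angle

# Ramping from 24 fps at 90 degrees to 48 fps requires 180 degrees:
print(compensating_shutter_angle(90.0, 24.0, 48.0))  # -> 180.0
```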
Power: 24VDC. 3-pin XLR connector: Pin 1 is (-), and Pin 2 is +24V.
Arriflex 35BL
Notes: The Arriflex 35BL was conceived in 1966 as the first portable, dual pin
registered, handheld, silent reflex motion picture camera. Its first significant
production use was at the 1972 Olympic Games, where it was employed for
sync-sound, cinéma vérité and slow-motion filming at speeds to 100 fps. At
the same time, its theatrical and television use began, especially for location
work.
The camera evolved. The analog footage and frame rate indicators of the
35BL-1 were replaced by a digital readout on the 35BL-2. With the 35BL-3,
the lens blimp was eliminated. The Arri 41mm bayonet mount was soon
replaced by the larger 54mm diameter PL lens mount.
The 35BL-4 introduced a brighter eyepiece and illuminated groundglass.
The 35BL-4s came out with a new, quieter, adjustable-pitch multilink
compensating movement, new footage/meters counter, redesigned internal
construction, and magazines with an external timing adjustment.
Arriflex BL-4s Threading and Magazine Diagram
Figure 9. Film takes up emulsion side in.
Movement: Industry standard dual pin registration. Two double pronged pull-
down claws on early 35BL-1 cameras for high speed to 100 fps with special
magazine roller arms. Two single prong pull-down claws on all other 35BL
cameras.
35BL-4s movement has an adjustable pitch control.
Aperture: .862″ x .630″ (22mm x 16mm), custom sizes available. Aperture plate is removable for cleaning.
35BL-3, 4 and 4s gates will fit 35BL-1 and 2 cameras, but not vice versa.
Displays: LED digital fps and footage readout on camera left. Audible out-of-
sync warning. A red LED near the footage counter indicates low footage,
memory, battery. BL-1 has mechanical readout.
Lens: 54mm diameter PL mount. Newer cameras switch from Normal to Super
35.
Early 35BL cameras had Arri bayonet mount. Some cameras were converted
to BNC mount.
35BL-2 and BL-1 cameras require lens blimps for silent operation.
Shutter: Rotating mirror shutter. See table.
Viewfinder: Reflex Viewfinder. 35BL-4s and BL-4 viewfinders are a stop
brighter than earlier 35BL cameras and feature a larger exit pupil. The finder
rotates 90° above and 90° below level with the image upright. Super Wide
Angle eyepiece with manual leaf closure and 6.5X magnification standard on
35BL-4s and BL-4 cameras. Adjustable eyecup allows the operator to select
the optimum eye-to-exit pupil distance. Finder extenders available for the
35BL-4s and 35BL-4 include a 12.5″ standard with switchable contrast
viewing filter, and variable magnification up to 2X. For the 35BL-3, 35BL-2
and 35BL-1: 9″ standard and 9″ anamorphic finder extenders.
Video: Video elbow with Arri and aftermarket video taps from CEI, Jurgens,
Denz, Philips, Sony and many others.
Viewing Screens: Pullout; clean and interchange with Hirschmann forceps. ArriGlow illuminated frame lines.
Mags: 400′ (122m) and 1,000′ (300m) coaxial. The 35BL can be handheld with
either magazine. Mechanical footage counters are integral.
Accessories: Sound Barney and heated barney.
Connections: Electronic Accessories: Multicamera interlock is achieved with
the EXS-2 50/60Hz External Sync Unit. SMPTE time code available for
later models.
Motors, Power: 12V DC. Power input through a 4-pin XLR connector on
camera. Pin 1 is (-); Pin 4 is +12V. Although most of the industry settled on
4 pin connectors on both ends, some cables have 5-pin XLR male connectors
on the battery end.
Arri Accessories
Arri accessories common to most cameras with flat bases (many 2C cameras still have handgrip motors, not flat bottoms):
Rods: There are two diameters of lens support/accessory rods in use: 15mm and
19mm. The 19mm rods are centered below the lens; 15mm rods are off-
center.
Environmental Protection Equipment: Aftermarket rain covers, splash
housings, rain deflectors and underwater housings available.
Camera Support Equipment: Arri Head; Arri Head 2 (newer, lighter, smaller).
Arrimotion (small and lightweight moco).
Lens Controls: Arri FF2 or FF3 follow focus. Preston Microforce or Arri LCS/wireless lens control. Preston Microforce zoom control. Iris gears available for remote iris.
Arri Matte Boxes
MB-16 (4 x 4 Studio): two 4″ x 4″ filters and one 4 1⁄2″ round filter (maximum of four 4″ x 4″ and one 4 1⁄2″ round). Swing-away mechanism for fast lens changes. Can be equipped with top and side eyebrows.
MB-17B (4 x 4 LW): A lightweight matte box holding two 4″ x 4″ filters and one 4 1⁄2″ round filter. Swing-away mechanism; can easily be adapted to 15mm or 19mm support rods via the BA bracket adapters. It can also be used on the SR lightweight rods. It can be equipped with a top eyebrow.
MB-16A (4 x 5.6 Studio): A studio matte box holding two 4″ x 5.650″ filters and one 4 1⁄2″ round filter (maximum of four 4″ x 5.6″ and one 4 1⁄2″ round). Swing-away mechanism. Can be equipped with top and side eyebrows.
MB-18 (4 x 5.6 Studio): A studio matte box holding three 4″ x 5.650″ filters
and one 138mm filter (maximum of four 4″ x 5.650″ and one 138mm).
Swing-away mechanism for fast lens changes. Can be equipped with top and
side eyebrows. Covers Super 16mm.
MB-19 (4 x 5.6 LW): A lightweight matte box holding two 4 x 5.650 filters and one 138mm or 4 1⁄2″ round filter (maximum of three 4 x 5.650 and one 138mm or 4 1⁄2″ round). Swing-away mechanism for fast lens changes; can easily be adapted to 15mm or 19mm support rods via the BA bracket adapters. Can also be used on SR lightweight rods. Can be equipped with top and side eyebrows. Covers Super 16mm.
MB-15 (5 x 6 Studio): A studio matte box holding two 5″ x 6″ filters and one 6″, 138mm or 4 1⁄2″ round filter. A rotating stage can be attached, adding two 4″ x 4″ filters. Swing-away mechanism for fast lens changes. Can be equipped with top and side eyebrows. Covers fixed lenses 14mm and up, as well as most zooms. Geared filter frame.
MB-14 (6.6 x 6.6 Studio): A studio matte box holding four 6.6″ x 6.6″ filters and one 6″, 138mm or 4 1⁄2″ round filter (maximum of six 6.6″ x 6.6″ and one 6″, 138mm or 4 1⁄2″ round). The four 6.6″ x 6.6″ filter trays are grouped in two stages with two filter trays each. The two stages can be rotated independently of each other, and each stage contains one filter tray with a geared moving mechanism allowing for very precise setting of grad filters. Swing-away mechanism. Can be equipped with top and side eyebrows.
6.6x6.6 Production Matte Box: Covers lenses 12mm and up, as well as most
zooms. Interchangeable two, four or six filter stages, rotatable 360°, swing-
away for changing lenses. Geared filter frames.
MB-14W: same as MB-14, but with a wider front piece for 9.8mm lenses or
longer.
MB-14C: same as MB-14, but with a shorter front piece for close-up lenses.
LMB-3 (4x4 clip-on): A very lightweight matte box that clips to the front of 87mm or 80mm lenses, holding two 4″x4″ filters. When using prime lenses with an 80mm front diameter (most Arri/Zeiss prime lenses), a Series 9 filter can be added with an adapter ring. The shade part can be easily removed if only the filter stages are needed. Can be used for 16mm prime lenses 8mm–180mm and 35mm prime lenses 16mm–180mm. It also attaches to the 16mm Vario-Sonnar 10–100mm or 11–110mm zoom lens.
LMB-5 (4x5.650 clip-on): A matte box that clips onto the front of the lens,
holding two 4″x5.6″ filters. Can be attached to lens front using clamp
adapters of the following diameters: 80mm, 87mm, 95mm and 114mm. Can
be equipped with a top eyebrow.
LMB-4 (6.6x6.6 clip-on): A matte box that clips onto the front of the lens,
holding two 6.6″x6.6″ filters. Can be used on 156mm front diameter lenses
(like the Zeiss T2.1/10mm) or, with an adapter, on 144mm front diameter
lenses (like the Zeiss T2.1/12mm).
Additional Accessories: Bridge plate support system for CG balance and mount
for matte box, follow focus, servo zoom drive, and heavy lenses; handheld
rig for shoulder operation of the camera.
Many good aftermarket matteboxes and accessories from Chrosziel,
Cinetech and many others.
Bell & Howell 35mm Eyemo Model 71
Eyemo K
Notes, Models: “Beats the Other Fellow to the Pictures” (from original 1926 ad)
Eyemo Q
Panavision GII Golden Panaflex
Notes: Very similar to the Platinum Panaflex. Incorporates most of the features
and operates with most of the accessories listed for that camera.
Weight: 24.4 lbs/11.08 kg (body with short eyepiece).
Movement: Dual pin registration, double pulldown claws. Pitch and stroke
controls for optimizing camera quietness. 4-perf movement standard, 3-perf
available. Movement may be removed for servicing.
FPS: 4–34 fps (forward only), crystal controlled at 24, 25, 29.97, and 30 fps.
Aperture: .980″ x .735″ (24.89mm x 18.67mm) Style C (SMPTE 59-1998).
Aperture Plate removable for cleaning. Full-frame aperture is standard,
aperture mattes used for all other frame sizes. A special perforation-locating
pin above the aperture ensures trouble-free and rapid film threading.
Interchangeable aperture mattes are available for academy, anamorphic,
Super 35mm, 1.85:1, 1.66:1, and any other.
dB: Under 24dB with film and lens, measured 3′ from the image plane.
Displays: Camera-left LED display readout with footage, film speed and low
battery.
Lens: Panavision mount. All lenses are pinned to ensure proper rotational
orientation. (Note: This is particularly important with anamorphic lenses.)
Super 35mm conversion upon request.
Behind-the-lens gel filter holder.
Iris-rod support on camera right side. A lightweight modular follow focus
control works on either side of the camera; optional electronic remote focus
and aperture controls.
Shutter: Reflex rotating mirror standard—independent of the focal-plane shutter.
Interchangeable, semisilvered, fixed (not spinning) reflex mirror (pellicle)
for flicker-free viewing upon request. Focal-plane shutter, infinitely variable
and adjustable in-shot. Maximum opening 200°, minimum 50°, with
adjustable maximum and minimum opening stops. Adjustable for
synchronization with monitors, etc. Manual and electronic remote-control
units.
Panavision Panaflex-X
Notes: Similar to the GII Golden Panaflex but has a fixed viewfinder system and
is not hand-holdable.
Weight: 20.5 lbs/9.31 kg (body only).
Movement: same as GII
FPS: Same as GII: 4–34 fps (forward only), crystal-controlled at 24, 25 and 29.97 fps.
Aperture: same as GII
dB: Under 24dB(A) with film and lens, measured 3′ from the image plane.
Displays: Single-sided LED display readout with footage and film speed. Same
as GII
Lens: Same as GII. Same behind-the-lens gel filter holder.
Shutter: Same as GII. 200°–50°
Viewfinder: Nonorientable.
Video: CCD video systems in black-and-white or color.
Viewing Screen: Interchangeable, same as GII
Mags: Same as GII
Accessories: Matte boxes: Same as GII.
Electronic Accessories: Same as GII.
Optical Accessories: Same as GII.
Environmental Protection Equipment: Same as GII.
Camera Support Equipment: Same as GII.
Motors, Power: same as GII (24V DC) and Millennium.
Misc: Camera cannot be handheld.
Photo-Sonics 35mm 4C/4CR
Notes: Reflex rotary prism camera, 125–2,500 fps. This is a rotary prism camera and is not pin-registered.
Film Specifications: 35mm B&H perforation, .1866″ pitch. 1,000′ (300m) loads preferable.
Weight: 125 lbs./56.81 kg with loaded 1,000′ (300m) mag, without lens.
Movement: Rotary prism. Continuous film transport. Rotary imaging prism.
FPS: High-speed system: 500–2,500 fps in 500 fps increments. Low-speed system: 250–1,250 fps in 250 fps increments. Special low-speed motor, 125–625 fps, available on request.
Aperture: Full-frame 35mm.
Displays: Mechanical footage indicator. Camera ready and power indicators.
Lens: Pentax 6x7 lens mount. 17mm through 165mm Pentax lenses, in addition
to zooms and Probe II lenses.
Shutter: Rotary disc, 4C has 72-degree fixed shutter. 36°, 18° or 9° available
upon request.
Viewfinder: The 4C utilizes a Jurgens orientable reflex viewfinder system with
a behind-the-lens beamsplitter to achieve flickerless reflex viewing.
Extension eyepiece with leveling rod.
Video: CEI Color V or III available in NTSC version only.
Viewing Screen: Standard ground glass formats available. Specify when
ordering camera package to ensure availability.
Mags: 1,000′ (300m). Double chamber. 35mm B&H perforation, .1866″ pitch.
Film must be rewound onto dynamically balanced aluminum cores. Twelve
cores with each 4CR rental package.
Accessories: Follow Focus: Arri follow focus.
Matte Boxes: 6.6x6.6 Arri matte box. Heavy-duty tilt plate, high and low hat, flicker meter, 90° angle plate. Raincovers are included with camera package.
Photo-Sonics 35mm 4E and 4ER
Notes: The 4ER pin-registered camera produces solid registration at frame rates from 6–360 fps. Camera is compatible with Unilux strobes (mid-shutter pulse).
Film Specifications: Standard 35mm B&H perforation with .1866″ (4.74mm) pitch. 1,000′ (300m) loads preferable.
Weight: 125 lbs./56.81 kg with loaded 1,000′ (300m) magazine, without lens.
Movement: Intermittent, pin registered. Intermittent with four registration pins,
twelve pull-down arms and a vacuum pressure plate to hold the film
absolutely stationary and registered during exposure.
FPS: 6–360 fps.
Aperture: Full-frame 35mm. Removable aperture plate.
Displays: Mechanical footage indicator.
Lens: BNCR or Panavision. (Academy centered.)
Shutter: 5° to 120°, adjustable with mechanical indicator.
Viewfinder: The 4ER utilizes a Jurgens/Arriflex reflex, orientable viewfinder
system with a behind-the-lens beamsplitter to achieve flickerless reflex
viewing. Optional external boresight tool available.
Video: CEI Color V or III available in NTSC version only.
Viewing Screen: Most standard ground glasses are available. Specify when
ordering camera package to ensure availability.
Mags: 1,000′ (300m). Double chamber. Standard plastic 35mm 2″ film cores. 35mm B&H perforation, .1866″ pitch.
Accessories: Environmental Protection Equipment: Same as 4C.
Camera Support Equipment: Same as 4C. Arri follow focus unit with right-
hand knob. Zoom Control and Iris Gears available on request.
Electronic Accessories: Optional remote cables, 75′ (23m) and 150′ (45.7m); remote speed indicator.
Additional Accessories: Extension eyepiece with leveling rod, heavy duty tilt
plate, high and low hat, flicker meter, 90-degree angle plate, Panavision lens
mount and “L” bracket support adapter with 5⁄8″ support rods, extension
eyepiece, follow focus.
Optical Accessories: Various Zeiss T1.3 and Nikkor primes, in addition to macro, zoom and Probe II lenses.
Motors, Power: 208V AC, single phase (200V to 250V AC is acceptable). 35 amps surge at 360 fps and 20 amps running. Cannot use batteries; requires 208V AC single-phase power.
Photo-Sonics 35mm 4E and 4ER Diagrams
Figure 23a & 23b. Film takes up emulsion side out.
Photo-Sonics 35mm 4ML (Reflex and Nonreflex)
Notes: The 4ML reflex is a compact, rugged, pin registered, high-speed camera
capable of crystal-controlled filming speeds from 10–200 fps. The 4ML reflex can be configured at only 9″ total height with prime lenses and a 400′ (122m) magazine, and only 5 1⁄2″ with a 200′ (61m) magazine. Compatible with Unilux strobe lighting (midshutter pulse).
Standard B&H perforation .1866″ (4.74mm) pitch.
Weight: 28 lbs./12.72 kg with loaded 400′ (122m) mag, without lens.
Movement: Intermittent with two registration pins and four pulldown arms.
FPS: 10–200 fps.
Aperture: .745″ x .995″ (18.92mm x 25.27mm). Academy centered.
Displays: Digital readout plate w/accessory port.
Lens: Lens Mount: BNCR, Panavision, Nikon, PL (Warning: Depth restriction
with certain lenses—restricted to certain zooms and longer primes.) Various
Nikkor lenses (extension tube set for close-focus available), Probe II lens
Shutter: 144° maximum, adjustable to 72°, 36°, 18°, and 9°.
Photo-Sonics 35mm 4ML Magazine Diagram
Figure 24. Film takes up emulsion side in.
Viewfinder: A behind-the-lens beamsplitter block provides flickerless reflex
viewing. Extension eyepiece. External Viewfinder: Nonreflex model utilizes
a boresight tool.
Video: CEI Color V or III available in NTSC version only.
Viewing Screen: TV/Academy/1:1.85 combo standard. Specify ahead for
different ground glass.
Mags: 200′ (61m) and 400′ (122m) displacement, snap-on magazines for quick reloading. Single chamber with daylight cover. 2″ film cores.
Accessories: Clamp-on 4x4 two-stage Arri Studio matte box.
Follow Focus: Arri follow focus unit with right-hand knob.
Electronic Accessories: Digital readout plate with accessory port, crystal-
controlled filming speeds from 10–200 fps, Unilux strobe lighting
(midshutter pulse).
Additional Accessories: Compatible with Panavision and Arri lens accessories.
Environmental Protection Equipment: Splash housings depth-rated 12′ (3.6m)–15′ (4.6m), rain covers.
Power: 28V DC. 12 amps surge, 7 amps running.
Ultracam
16MM
Aaton XTR Prod
Weight: 13 lbs./6kg with 400′ (122m) load, 18 lbs/8.5 kg with 800′ (244m)
load, and 12V onboard battery.
Movement: Coplanar single claw movement with lateral pressure plate that
ensures vertical and lateral steadiness to 1⁄2000th of image dimensions. Hair-
free gate.
FPS: 18 sync speeds including 23.98, 24, 29.97, 30, 48 and 75 fps and crystal-
controlled adjustable speeds from 3–75 fps in 0.001 increments via a
minijog. Internal phase shift control for TV bar elimination.
Aperture: 1.66:1. Optical center is switchable for Super 16 and standard 16mm
operation.
dB: 20dB -1/+2.
Displays: Illuminated LCD display, speed selection, elapsed footage, remaining
footage, ISO selection, battery voltage timer and date. Pre-end and end of
film warning, mag ID, full time code readout. Memo-mag allows magnetic
recognition by the camera body of seven different magazines. Counter
provides LCD display of remaining footage for short-end load or multi-
emulsion shoot.
Lens: Interchangeable hard fronts: Arri PL as standard, Aaton, Panavision.
Quick centering of lens axis for 16mm to Super 16 conversion formats.
Field-convertible quick centering of lens axis, viewfinder and CCD target
between formats.
Shutter: Reflex mirror shutter is user adjustable: 180°, 172.8°, 150° for 25 fps
under 60Hz lighting, 144°, 135°, 90°, 60°, 45°, 30°, 15°.
Viewfinder: Reflex from shutter. Fiber-optic imaging finder field is 120 percent
of standard 16mm frame. Swiveling auto-erect image eyepiece with 10X
magnification. 20cm or 40cm extensions and left-eye extender available.
Field interchangeable standard 16mm/Super16 ground glass with Aatonite
markings available. Built-in light meter display in viewfinder, also indicates
low battery, out of sync and pre-end and end of film warnings.
Ikonoskop A-Cam SP-16
Notes: Possibly the smallest, lightest MOS Super 16 camera, for the price of a DV camera: about the size of the old GSAP, much lighter, and uses readily available 100′ daylight spools.
Weight: 3.3 lbs/1.5kg with lens, internal battery and film.
Movement: Single transport claw
FPS: 6, 10, 12.5, 18, 20, 24, 25, 30, 36, 37, 37.5; timelapse from 1 fps to 1
frame in 24 hours.
Aperture: Super 16
Aperture Plate: Super 16 only. Super 16-centered lens port and viewfinder.
Displays: Backlit LCD main readout with three control buttons shows battery
status, film exposed, frame rate and time-lapse mode.
Lens: C mount. Comes with Kinoptik 9mm f/1.5.
Shutter: 160°
Viewfinder: Nonreflex, magnetically-attached, parallel-mounted ocular.
Loads: 100′ (30.5m) daylight spools.
Accessories: 1⁄4″-20 and 3⁄8″-16 tripod mounting threads.
Ikonoskop A-Cam SP-16 Magazine Diagram
Figure 27. Film takes up emulsion side in.
Connections: Main power switch. Start/stop button. Connector for charger and
external 12V DC power (runs on 10.8-15 V DC). Requires external power
for speeds of 30 and 37.5 fps. Remote on/off connector.
Power: Batteries: Camera comes with single-use lithium battery pack. 1400
mAh, 70 g. Runs 25 rolls at 25 fps at 20°C. A charger or an external power
supply must never be connected to the camera when using a single-use
lithium battery pack.
Optional internal rechargeable Li-Ion battery, 480mAh, 46g, Runs 25 rolls at
25 fps at 20°C. Battery is located under the battery compartment cover on
the left side of the camera.
Arriflex 16 BL
Weight: 16.3 lbs./7.39 kg, camera, 400′ (122m) magazine and lens
Movement: Pin registered.
FPS: 5–50 fps, forward or reverse, when used with appropriate motor and speed
controls.
Aperture: .405″ x .295″ (10.3mm x 7.5mm). Standard 16mm.
dB: 30dB.
Displays: Tachometer and footage counter.
Lens: Lens Mount: Steel Arri Bayonet mount (lens housings required to
maintain minimal camera operating sound levels). All Arriflex Standard or
bayonet-mount lenses that cover the 16mm format can be used with lens
housings. Standard zoom and telephoto lenses should be used with the
bridgeplate support system. Lenses: Fixed focal length Standard and Zeiss
Superspeed lenses. Zeiss, Angenieux and Cooke zoom lenses. Some wide-
angle lenses may hit shutter.
Shutter: Rotating mirror-shutter system with fixed 180-degree opening (1⁄48th of
a second at 24 fps).
Arriflex 16BL Single System Threading Diagram
Figure 28a. Film takes up emulsion side in.
Viewfinder: Reflex Viewfinder: High-aperture/parallax-free viewing, 10X
magnification at the eyepiece. Offset finder accessory available for handheld
camera applications for additional operator comfort. Finder extender also
available. APEC (Arri Precision Exposure Control): Exposure control
system meters behind the lens and displays continuous exposure information
(match-needle mode) in the viewfinder.
Video: May be attached to eyepiece.
Viewing Screen: Noninterchangeable ground glass. Requires trained tech to
change, using special tools.
Mags: 200′ (61m), 400′ (122m) forward and reverse, and 1,200′ (366m) forward-only magazines. Magazine loading: displacement.
Film Cores: 2″ plastic cores. Film core adapter removable to adapt 100′ (30.5m) daylight spools.
Accessories: Universal lens housing for use with fixed focal-length lenses when
minimal camera operating sound level is required (accepts 3x3 or a 94mm
diameter filter).
Arriflex 16BL Double System Threading Diagram
Figure 28b. Film takes up emulsion side in.
Electronic Accessories: Variable speed controls. Jensen, CPT, Tobin.
Additional Accessories: Plug-in single-system sound module and single-system
record amplifier. Handholdable.
Optical Accessories: Periscope finder orients image.
Batteries: 12V DC. Accepts blocks and belts.
Camera Support Equipment: Offset finder, assorted lens blimps, speed
control, bellows matte box, sound module.
Matte Boxes: Bellows type available for all 16BL lens housings.
Motors, Power: Two motor drive systems available. The 12V DC quartz-
controlled motor provides cordless sync-control and automatically stops
shutter in viewing position. Speed range is 6, 12, 24 (quartz-controlled) and
48 fps. The Universal motor is transistorized and governor-controlled. A
variable speed control accessory will drive the universal motor from 10 fps
to 40 fps.
Arriflex 16S; 16M
Arriflex M
Models: About 20,000 S (for Standard) cameras made, and 1,500 M (for Magazine) cameras. 16S/B; 16S/B-GS; 16M/B. Main difference: you can use 100′ daylight spools in the Arri S body without magazines; the Arri M only uses magazines.
Arriflex 16S/B: accepts 100′ (30.5m) daylight spools in body as well as top-mounting torque-motor-driven 200′ (61m) and 400′ (122m) magazines.
Arriflex S
Notes: Over 6,000 still in use. Many conversions and aftermarket adaptations.
Arriflex 16SR 1, 2 and 3
Weight: 11–12 lbs./5–5.5 kg, body and magazine, without film and lens.
Movement: Single pull-down claw; single registration pin, with fixed-gap film
channel.
FPS: 16SR-1 and 16SR-2 from 5-75 fps with external variable speed control.
16HSR-1 and -2 High-Speed from 10–150 fps with external variable speed
control.
Switches located in the camera base of early versions lock in crystal speeds
of 24 and 25 fps, 50 and 60Hz and, in later SR cameras, 30 fps. All 16SRs
can be modified with a 30 fps kit.
Aperture: Aperture plate is fixed. Standard cameras can be modified to Super
16. Aperture of regular 16SR camera is .295″ x .405″ (7.5mm x 10.3mm);
aperture of Super16 camera is .295″ x .484″ (7.5mm x 12.3mm).
Super16 conversion cannot be done in the field—requires repositioning of
the optical center axis of lens mount, viewfinder, tripod thread and accessory
holder by 1mm to the left. Height of Super 16 aperture is identical to
Standard 16, but the aperture is 2mm wider, pushing into the left perf area on
the negative—which is why you use single-perforation film stock.
Arriflex 16 SR 1, 2 and 3 Magazine Diagram
Figure 30. Film takes up emulsion side in.
dB: 22dB–28dB (±2dB)
Displays: Footage remaining on back of magazine, and footage shot on take-up
side of magazine (dial settable).
Lens: Bayonet on earlier models and PL on later models and conversions.
Lens Control: Arri Follow Focus 2 or FF3. Preston Microforce or Arri
LCS/wireless zoom control.
Shutter: Rotating mirror-shutter.
Viewfinder: Reflex, swing-over viewfinder with parallax-free viewing and 10X
magnification at the eyepiece. Swings 190° to either side of the camera for
left- and right-side operation. The finder also rotates 360° parallel to camera
on either side and swings out 25°. Red out-of-sync LED, and APEC
exposure indicator. APEC through-the-lens system provides continuous
exposure information (match-needle mode) on four-stop indicator displayed
in viewfinder. For film speeds ASA 16–1000. Optional servo-operated
automatic exposure control system (with manual override) for complete
automatic exposure control with auto-iris lenses available. Super 16 SRs have the same exposure meter system as regular 16SRs, but the automatic exposure control feature cannot be installed.
Video: OEM removable video “T-bar” viewfinder assembly, optional and necessary for video assist. Aftermarket video assists from Denz, P&S, CEI and others.
SUPER 8
Beaulieu 4008
FPS: 2–70 fps (speed select for either 18, 24 or 25 fps). Mechanical single frame.
Aperture: .166″x .224″ (4.22mm x 5.69mm).
Displays: Footage.
Lens: C mount with power lens contacts. Power zoom lenses available: Schneider 6–66mm or 16–70mm and Angenieux 8–64mm or 6–80mm.
Macrocinematography mode. Focusing ring. Built-in electric zoom control. On-lens variable speed. Iris gears on lens governed by camera meter. Built-in 85 filter; through-the-lens internal meter, ISO 12–400 ASA.
Shutter: Guillotine; 1⁄90th second at 24 fps and 1⁄60th at 24 fps with the variable shutter; varies with frame rate; closed lock position.
Viewfinder: Reflex viewfinder.
Viewing Screen: Focusing-screen. Retractable.
Mags: 50′ (15m) silent Super 8 or Pro8mm cartridges.
Accessories: Remote cable release.
Power: Onboard 250mA, 7.2V/3.6V combined.
Beaulieu 6008 Pro
FPS: 23.98, 24, 25, 29.97 and 30 fps crystal-controlled. 4, 9, 18, 24, 36 and 80
fps noncrystal. New crystal also features built-in phase control so TV
monitors can be shot without video bar. Single frame function with variable
shutter rate for animation effects. Intervalometer for timed exposures.
Aperture: .166″ x .224″ (4.22mm x 5.69mm). Optional mask for 1.85
Academy.
Displays: Visual display in viewfinder indicating an absolute crystal lock.
Liquid crystal display frame and centimeter counters keep track of exact
footage transported in forward and reverse. Two-stage LED informs user
when camera service is required.
Lens: C mount and custom Beaulieu power mount. Lenses: Angenieux 6-90mm
T1.6. Schneider 6-70mm. 3mm superwide prime. Superwide elements can be
removed from lens to obtain 6mm focal length. Nikon compatible 60–
300mm telephoto. Anamorphic Lens System. Aspect ratio is 2.35:1 (1.75X
squeeze) and lens is mounted with custom brackets. Same lens can also be
used with any standard projector to permit widescreen projection. C-mount
adapters allow various non-C-mount lenses to be used.
Lens Control Unit for variable power zoom and aperture control. These
features include manual override.
Viewfinder: Reflex viewfinder
Viewing Screen: Focusing screen.
Mags: 50′ (15m) sound or silent Super 8 and Pro8mm cartridges. 200′ (61m) sound cartridge. (Sound cartridges no longer available.)
Accessories: Additional Accessories: Motor-driven rewind is included to
perform dissolves and double exposures. Hard-shell sound blimp made from
industrial-grade aluminum and sound-absorbing foam (-22dB effective).
Rental only.
Motors, Power: 12V DC input. Regulated and fused for 12V DC battery belt.
Microprocessor crystal-controlled 1.5 amp motor.
65MM
Arriflex 765
MSM 8870
Movement: MSM Monoblock high-speed, dual register pins, claw engages six perfs. Shrinkage adjustment changes stroke and entry position. Indexable loop-setting sprockets have independent locking keeper rollers. Vacuum backplate ensures film-plane accuracy and removes without tools for cleaning. Movement is removable for cleaning and lubrication.
FPS: From time-lapse to 60 fps forward, also to 30 fps reverse. Crystal sync
from 5–60 fps in .001 increments.
Aperture: 2.072″ x 1.485″ (52.63mm x 37.72mm). Aperture plate removable
for cleaning and lubrication.
Displays: Status LEDs for power, heat, low battery, mag ready, buckle and
speed sync. Two illuminated LCD footage counters. Digital battery volt/amp
meter. Circuit breakers for camera, mag, heat and accessories. Control port
allows operation from handheld remote or interface with computers and
external accessories.
Lens: MSM 75mm diameter x 80mm flange depth. BNC-style lens mount is
vertically adjustable 7mm for flat or dome screen composition. Mount
accepts modified Zeiss (Hasselblad), Pentax, Mamiya and other large-format
lenses.
Shutter: Focal plane shutter, manually variable from 172.8° to 55° with stops at
144° and 108°.
MSM 8870 65mm/8 perf. Threading Diagram
Figure 35.
Viewfinder: Reflex viewfinder, spinning mirror reflex. Finder rotates 360° with
upright image, which can be manually rotated for unusual setups. Finder
shows 105 percent of frame, magnifier allows critical focusing at center of
interest. Single lever controls internal filter and douser. Heated eyepiece has
large exit pupil and long eye relief. High resolution black-and-white or
optional color CCD video tap is built into camera door with swing-away
50/50 beamsplitter. Viewfinder removes completely for aerial or underwater
housing use.
Viewing Screen: Interchangeable ground glasses with register pins for film
clips.
Mags: 1,000′ (300m) displacement magazines use MSM TiltLock mount. Magazines lock to camera with pair of 8mm hardened pins and can tilt away from operator to allow easier camera threading. Optional minimum-profile 1,000′ (300m) coaxial magazines use same mount without tilt feature. Both
magazines operate bidirectionally at all camera speeds. Positive camlock
secures mag in running position and switches power to motor and heater
contacts in mag foot. Expanding core hubs have integral DC servo motors
controlled by film tension in both directions, with soft startup to eliminate
slack. Tightwind rollers guide film winding for smooth solid rolls at any
camera angle. Noncontact light traps feature infrared end-of-film sensors.
Accessories: 15mm matte rods are on Arri BL centers for accessory
compatibility and use standard Arri accessories.
Motors, Power: Integrated.
Panavision System-65 65mm
Note: There are too many digital cameras and recorders being used today,
with the technology in constant flux, to create an all-inclusive list; these entries
represent a selection of professional cameras and recorders commonly being
used for cinema applications. Specifications can change due to updating of
firmware and software over time.
Cameras
ARRI ALEXA
ARRI ALEXA M
ARRI ALEXA Plus
ARRI ALEXA Studio
ARRI D-21
Canon C300
Panasonic AJ-HDC27H VariCam
Panasonic AJ-HPX3700 VariCam
Panavision Genesis
RED EPIC
RED ONE
RED Scarlet-X
Silicon Imaging SI-2K
Sony F23 CineAlta
Sony F35 CineAlta
Sony F65 CineAlta
Sony HDC-F950 CineAlta
Sony HDW-F900 CineAlta
Sony HDW-F900R CineAlta
Sony PMW-F3 CineAlta
Vision Research Phantom Flex
Vision Research Phantom HD Gold
Vision Research Phantom Miro M320S
Vision Research Phantom 65 Gold
Recorders
Cinedeck EX
Codex ARRIRAW
Codex Onboard M
Codex Onboard S
Convergent Design Gemini
Panavision SSR-1
Sony SR-R1
Sony SR-R4
Sony SRW-1
S.Two OB-1
Vision Research Phantom CineMag-II
CAMERAS
ARRI ALEXA M
Overview: Based on the ALEXA Plus 4:3, but the camera head containing the sensor and lens mount is separate from the body containing the image processing and recording. Ideal for dual-camera 3-D rigs and other situations where a small and lightweight camera body is needed (Steadicams, aerial and underwater photography, etc.). See ALEXA and ALEXA Plus 4:3 specs for anything not listed below.
Weight: Camera head: 2.9 kg (6.4 lb)/body: 5.5 kg (12.1 lb).
Body dimensions (LxWxH): Camera head: 21.1cm x 12.9cm x 14.9cm (8.31″ x
5.08″ x 5.87″)/body: 32.3cm x 15.3cm x 15.8cm (12.72″ x 6.02″ x 6.22″).
Additional connectors: SMPTE 304M hybrid fiber optical link (LEMO / one
each on camera head and body). Camera head also has its own 24V DC in
(2-pin Fischer) and 24V DC remote start/accessory power out (3-pin Fischer
x2).
Power: Minimum 15V DC input to body is required to power camera head
through SMPTE hybrid fiber cable up to 50m, w/o accessories. Camera head
has one 10.5V to 34V DC power input used to power the head independently
from the camera body.
Power consumption: 40W for camera head; 85W for body.
ARRI ALEXA Plus
Overview: Shares most of the same technical data as the ALEXA, but with
built-in support for the ARRI wireless remote system, cmotion cvolution lens control system, and ARRI LDS (lens data system) including lens data mount
and lens data archive for lenses without built-in LDS. Also has one
additional monitor out, one additional RS (remote start), two LCS (lens
control system), one LDD (lens data display) and three lens motor
connectors, built-in motion sensors and Quick Switch BNC connectors.
Assistive displays on EVF-1 and monitor out: electronic level (horizontal
gauge). Automated sync of lens settings for 3-D applications in master/slave
mode. SD card for importing custom lens tables for the lens data archive.
ALEXA Plus 4:3 model allows a taller 4:3 area of the sensor to be used, which is ideal for anamorphic photography. Anamorphic desqueeze with license key. Only specs which differ from the standard ALEXA are listed:
Weight: Body only: 7.0 kg (15.4 lb)/body + EVF w/mounting bracket and
handle: 8.4 kg (18.5 lb).
Body dimensions (LxWxH): 33.2cm x 17.5cm x 15.8cm (12.95″ x 6.89″ x
6.22″).
Sensor: S35mm CMOS with Bayer CFA; dual gain architecture (DGA). 3392 x
2200 pixels. 4:3 model: 3168 x 2160 active photosites for Surround
View/recorded area is 2880 x 2160 pixels. 16:9 model: 3168 x 1940 for
Surround View/recorded area is 2880 x 1620 pixels.
Active sensor dimensions: 4:3 model: Surround View for EVF is 26.14mm x 17.82mm (1.029″ x .702″)/recorded area is 23.76mm x 17.82mm (.935″ x .702″)/image circle is 29.70mm (1.169″). 16:9 model: Surround View for EVF is 26.14mm x 16.00mm (1.029″ x .630″). Recorded area is 23.76mm x 13.37mm (.935″ x .526″)/image circle is 27.26mm (1.073″).
Video connectors: HD-SDI/1.5G/3G/T-Link out (BNC x2), monitor out (BNC
x2).
Power connectors: 24V DC in (2-pin Fischer), 12V DC out (2-pin LEMO), RS
24V DC out (3-pin Fischer x3).
Other connectors: Ethernet (10-pin LEMO), audio in (5-pin XLR), time code
(5-pin LEMO), EVF (16-pin custom LEMO), RET/Sync-In (BNC), EXT
(16-pin LEMO), stereo headphone (3.5mm TRS), lens control system (5-pin
Fischer x2), lens data display (5-pin Fischer), iris (12-pin Fischer), zoom
(12-pin Fischer), focus (12-pin Fischer).
Power: 24V DC (10.5V to 34V).
Power consumption: 85W for camera + EVF w/o accessories.
Panasonic AJ-HPX3700 VariCam
Overview: A 2⁄3″ three-sensor 1080P camcorder that records onto P2 cards using AVC-Intra 100 (10-bit 4:2:2 full-raster sampling), AVC-Intra 50 and DVCPRO HD codecs. Dual-link HD-SDI output allows an external recorder to capture 10-bit 4:4:4 Log. Scan reverse (image flip) function can be used to correct image inversion from certain lens adaptors. Seven gamma modes including film-rec. Gamma correction for monitor and viewfinder display can be selected separately from recorded gamma.
Weight: Body only: approx. 4.5 kg (9.9 lb)/typical configuration w/ENG lens:
7.0 kg (15.4 lb).
Body dimensions (LxWxH): 13.2cm x 20.4cm x 31.3cm (5.25″ x 8″ x 12.31″).
Sensor: Three 2⁄3 CCD (IT type), 1920 x 1080 active photosites per sensor.
Active sensor dimensions: 9.58mm x 5.39mm (.3772″ x .2122″).
Quantization: 14-bit A/D.
Lens mount: 2⁄3 bayonet type (B4).
Sensitivity: f/10.0 at 2000 lux.
Frame rates: 1–30P (in single increments), interval recording.
Electronic shutter: 1⁄60, 1⁄100, 1⁄120, 1⁄250, 1⁄500, 1⁄1000, 1⁄2000. Shutter angles can be
selected. Synchro scan function.
Built-in filters: Dual-stage: A: Cross, B: 3200K (clear), C: 4300K, D: 6300K,
1: clear, 2: 1⁄4ND, 3: 1⁄16ND, 4: 1⁄64ND.
Electronic viewfinder: Standard 2″ black-and-white 720P CRT viewfinder (AJ-HVF27); color viewfinders available from third-party vendors.
On-set monitoring: 1080P HD output (single-link HD-SDI).
Recording device: Internal AVC-Intra 100/50 and DVCPRO HD to P2 cards
(16 GB, 32 GB, 64 GB). SD/SDHC memory card slot allows metadata files
to be recorded; also allows scene files and firmware updates to be loaded.
Recording format: 1080P (59.94 or 60Hz) 10-bit 4:2:2 (single-link HD-SDI) or
4:4:4 (dual-link HD-SDI). DVCPRO HD recording reduces this to
compressed 8-bit 4:2:2 1440 x 1080 pixel signal at 100 Mbps.
Menu display/controls: Thru viewfinder or external monitor. Side LCD panel
displays TC, audio levels, battery level, tape length, etc. Also switches for:
recording start/stop, two user assignable switches, power on/off, gain
(L,M,H), save/standby, bars/camera, WB preset/A/B, audio levels, setting
TC, switching shutter speed, white/black balance.
Indicators: Tally, error.
Operating temperature: 0°C to + 40°C (+32°F to +104°F).
Operating humidity: Less than 85%.
Recording time: Five P2 card slots, 64 GB per card, allow 800 minutes of 24P/1080 content.
Video connectors: Single-link HD-SDI out (BNC x2), tri-level sync (BNC),
time code in (BNC), time code out (BNC).
Audio connectors: Audio in (Ch1/Ch2, female 3-pin XLR), audio out (male 5-
pin XLR), mic in (phantom +48V female 3-pin XLR), stereo headphone
(3.5mm minijack).
Power connectors: 12V DC in (male 4-pin XLR), 12V DC out (female 4-pin,
11 to 17V DC, max 100mA).
Other connectors: Lens (12-pin), remote (8-pin), ECU (6-pin), EVF (20-pin).
Power: 12V DC (11V to 17V).
Power consumption: 33W (w/o VF, SAVE REC MODE) / 39W (typical set-up
and conditions).
Sony F23 CineAlta
Overview: A 1080/60P 4:4:4 camera with three Power HAD EX 2⁄3 (RGB)
CCDs. With a wide color gamut prism block and digital signal processing up
to 36 bits, color space is not limited to broadcast ITU-709 standards
(television color palette). Electronic viewfinder only. The F23 records to an
attached SRW-1 tape recorder or SR-R1 memory recorder; the camera signal
can also be sent to an external recorder. With attached SRW-1 or SR-R1, the
F23 can record 1–60 fps for overcranking, undercranking, speed ramp
effects, time lapse and interval recording. Results can be reviewed
immediately on the set without any additional gear. Camera operates in
either cine mode using S-Log gamma setting allowing 700% dynamic range
or in custom mode with full engineering control available including
hypergamma. Reinforced steel B4 lens mount.
Weight: Approx. 5.0 kg (11 lb); with attached SRW-1 approx. 9.5 kg (21 lb).
Body dimensions (LxWxH): body only: 21.6cm x 19.9cm x 20.4cm (8 5⁄8″ x 7 7⁄8″ x 8 1⁄8″); w/top-mounted SRW-1 VTR (LxH): 29.4cm x 41.2cm (11 5⁄8″ x 16 1⁄4″); w/rear-mounted SRW-1 VTR (LxH): 38.8cm x 30.7cm (15 3⁄8″ x 12 1⁄8″).
Sensor: Three 2⁄3 CCD (Hyper HAD EX), 1920 x 1080 active photosites per
sensor.
Active sensor dimensions: 9.58mm x 5.39mm (.3772″ x .2122″).
Quantization: 14-bit A/D.
Lens mount: 2⁄3 bayonet type (B4).
Built-in filters: Dual-stage: A: 3200°K (clear), B: 4300°K, C: 5600°K, D: 6300°K, E: 1⁄2ND. ND1: clear, ND2: 1⁄4ND, ND3: 1⁄16ND, ND4: 1⁄64ND, ND5: cap.
Sound level: Camera only (tethered VTR): 22.5dB(A). Camera with VTR rear-mounted: 26dB(A). Camera with VTR top-mounted: 26dB(A). Camera w/SSR rear-mounted: 25dB(A). Camera with SSR top-mounted: 25.5dB(A). (All values 3 feet from film plane.)
Sensitivity: f/10.0 at 2000 lux, 89.9% reflective. At 24 fps with a 1⁄48-sec.
shutter speed, the exposure index is approx. equivalent to ISO 400.
Frame rates: Fixed speeds are 23.98P, 24P, 25P, 29.97P, 30P, 59.94P, 60P, and
59.94i and 60i. Variable rates: 1–60 fps (4:4:4 or 4:2:2)/interval recording,
time lapse and speed ramps possible.
Electronic shutter: Variable mode shutter angles from 4.2° to 360° are
achievable. Settings are either OFF, or VARIABLE (4.2°–360°) or STEP
(eight user-defined settings chosen from 4.2°–360°).
Electronic viewfinder: Sony HDVF-C30W LCD color viewfinder (w/3.5″ LCD screen).
On-set monitoring: 1080P/1080i (single or dual link HD-SDI) / SD
(NTSC/PAL).
Recording device: Attached Sony SRW-1 HDCAM SR VTR or SR-R1 memory
recorder, or cabled to an external recorder via single or dual-link HD-SDI.
Video is fed automatically to SRW-1 / SR-R1 through top or rear connector.
See separate Sony SRW-1 and SR-R1 entries for additional details.
Recording format: 1080P or 1080i, 10-bit 4:4:4 (RGB) or 4:2:2 (YCbCr) using
HDCAM-SR compression. SR-Lite (220 Mbps) and SR-SQ (440 Mbps) are
supported as standard; SR-HQ (880 Mbps) is supported on SRW-1. SR-HQ
and uncompressed recording are supported on SR-R1 as an option with the
SRK-R311. S-Log, Hypergamma, and Rec.709 gamma options.
Menu display/controls: LCD viewfinder, monitor out or side blue EL display
panel. Programmable LCD and EL panel menu gives fast access to
frequently used camera settings. Also: recording start/stop, three assignable
switches, on/off. Separate assistant panel (wired remote control device) with
blue EL display also available.
Indicators: Tally/error/assignable.
Operating temperature: 0°C to +40°C (+32°F to +104°F).
Operating humidity: Less than 85%.
Video connectors: Genlock in (BNC), test out/VBS/HD Y (BNC), monitor
out/single-link HD-SDI out (BNC x2), dual-link HD-SDI out or two HD-
SDI 4:2:2 via Interface Box (BNC x2).
Audio connectors: audio in (3-pin XLR x2).
Power connectors: 12V/24V DC in (8-pin male LEMO), DC in (4-pin male
XLR, camera only via interface module), 12V DC out (11-pin Fischer, max
4A), 24V DC out (3-pin Fischer, max 5.5A).
Other connectors: Lens (12-pin Hirose), RMB/MSU remote (8-pin Hirose),
viewfinder (20-pin Sony x2), external I/O (5-pin female LEMO), network
(RJ-45, 10Base-T/100Base-TX).
NOTE: SRW-1 only has Ch1/Ch2 audio input, time code in/out, and an
earphone jack. All other inputs and outputs are on the SRPC-1. For this
reason, when separating the SRW-1 unit from the Sony F23 camera, you
need to attach an interface box to the camera and use the SRPC-1 with the
SRW-1. Then you can connect the camera to the separate recorder through
HD-SDI cables for picture.
Power: 12V DC (10.5V to 17V) / 24V DC (pass-thru for accessory out only).
150W power supply recommended.
Power consumption: 56W (w/o lens, viewfinder, at 24P).
Sony HDC-F950 CineAlta
Overview: A 1080/60P 4:4:4 camera with three 2⁄3 (RGB) CCDs. Requires
external HD recorder. Electronic viewfinder only. The HDC-F950 provides
full-bandwidth digital 4:4:4 RGB signal processing and output capability.
4:4:4 HD can be sent via dual-link HD-SDI to recorder, or via single optical
fiber cable to HDCU-F950 Camera Control Unit or to SRPC-1 processor
unit (w/ HKSR-101 option installed) for SRW-1 HDCAM SR recorder. The
HDC-F950 has the ability to do time exposures (slow shutter). The HKC-
T950 HD CCD block adaptor is a small unit containing just the lens mount
and imaging block, separated from the DSP inside the main camera body by
up to 10 meters, or 50 meters with an optional cable. This has been adapted
for 3-D systems (using two “T-blocks” side by side), plus unique shooting
situations that require a small camera unit.
Weight: Body only: approx. 5.1 kg (11.24 lb).
Body dimensions (LxWxH): 36.0cm x 13.3cm x 27.6cm (14.17″ x 5.24″ x
10.87″).
Sensor: Three 2⁄3″ CCD (FIT type), 1920 x 1080 active photosites per sensor.
Active sensor dimensions: 9.58mm x 5.39mm (.3772″ x .2122″).
Quantization: 14-bit A/D.
Lens mount: 2⁄3 bayonet type (B4).
Sensitivity: f/10.0 at 2000 lux, 89.9% reflective. At 24 fps, 1⁄48-second shutter
speed, the exposure index is approx. equivalent to ISO 320.
Frame/field rates: 23.98P, 24P, 25P, 29.97P, 30P, 50i, 59.94i, 60i
Electronic shutter: (24P mode): off, 1⁄32, 1⁄48, 1⁄60, 1⁄96, 1⁄125, 1⁄250, 1⁄500, 1⁄1000.
ECS (ClearScan) 24–2200Hz (minimum setting depends on frame rate
selected).
Built-in filters: dual-stage: A: Cross (or 5600°K on later versions), B: 3200°K,
C: 4300°K, D: 6300°K, 1: clear, 2: 1⁄4ND, 3: 1⁄16ND, 4: 1⁄64ND.
Electronic viewfinder: 2″ black-and-white CRT viewfinder (HDVF-20A) or
color LCD viewfinder (HDVF-C30W); other viewfinders available.
On-set monitoring: 1080P/1080i HD (single-link HD-SDI).
Recording device: Any external HD recorder.
Recording format: 1080P or 1080i 10-bit 4:2:2 YCbCr (thru single-link HD-
SDI) or 10-bit 4:4:4 RGB (thru dual-link HD-SDI or optical fiber cable).
Menu display/controls: Viewfinder (for access to menu using thumbwheel) or side LCD panel (for TC, audio, etc.). Also switches for: record start/stop, two assignable switches, power on/off, white balance, etc.
Indicators: Tally/error.
Operating temperature: -20°C to +45°C (-4°F to +113°F).
Operating humidity: 25% to 80% relative humidity.
Video connectors: Dual-link HD-SDI out (BNC x2), single-link HD-SDI out
(BNC), test out (BNC), genlock in (BNC), optical fiber out.
Audio connectors: Ch1/Ch2 audio in (3-pin female XLR x2), mic in (3-pin
female XLR), stereo headphone (3.5mm minijack).
Power connectors: 12V DC in (4-pin male XLR, 11 to 17V DC), 12V DC out
(4-pin male, 10.5 to 17V).
Other connectors: Lens (12-pin), remote (8-pin), EVF (20-pin), external I/O
(20-pin).
Power: 12V DC (11V to 17V).
Power consumption: 33W.
Sony HDW-F900 CineAlta
Cinedeck EX
The Cine Lens List includes most of the current lenses available for rental or
purchase. Many of these lenses are no longer made, but are available used
or remounted. Many “House” brands exist that are reworked versions of still
camera lenses and older cine lenses. For complete data, see your local rental
house or lens manufacturer.
SPECIAL PURPOSE LENSES
Swing Shift Lens
The Clairmont Swing Shift Lens System consists of a multi-axis movable
lens-board receiver attached to an Arriflex-style PL lens mount by a rubber
bellows. Specially modified lenses are attached to the receiver board by two
captive screws. The assembly is able to move the entire lens in the following
directions: tilt up and down, swing side to side, shift position and focus right to
left, or up and down. Tilting/swinging the lens plane alters the focus;
tilting/swinging the film plane alters the shape. By combining the various
parameters of movement, different and unusual effects can be accomplished such
as increased or decreased depth of field, selective planes of focus, repositioning
of image without changing placement of the camera, correction or addition of
image distortion. The focal lengths available are 20mm, 24mm, 28mm, 35mm,
50mm, 60mm, and 80mm.
Dynalens
An optical stabilizing device mounted on the camera’s optical axis that
compensates for image motion due to vibration of the camera.
A pair of gyro sensors detect rapid motion and drive two gimbal-mounted
glass plates, between which is a liquid-filled cell. One plate moves around a
vertical axis and the other around a horizontal axis in a manner which deviates
the light path opposite to the vibratory movement, causing the image to stay still
relative to the image receptor (film or video).
A low-frequency-response, manually operated potentiometer on the control
module adjusts the frequency sensitivity of the unit so controlled panning or
tilting may be done.
The Dynalens is available in a 2.3″ diameter for 16mm film or small video
cameras, and in 3.8″ and 8″ diameters for larger format cameras. The maximum
useful angular deviation is ±6°.
EXTREME CLOSE-UP CINEMATOGRAPHY
There are two basic methods for focusing a lens on very close objects: 1) by
adding extension tubes or extension bellows and 2) by employing plus diopter
supplementary lenses in front of the normal lens.
Diopter Lenses
Extreme close-ups may be filmed by employing positive supplementary
lenses, generally of a weak meniscus form, called diopter lenses, in front of the
normal lens.
The power of these positive supplementary lenses is commonly expressed in
diopters. The power in diopters is the reciprocal of the focal length in meters.
The plus sign indicates a positive, or converging lens. Thus, a +3 diopter lens
has a focal length of 1⁄3 meter, or 39.3 inches divided by 3 or approximately 13
inches. A +2 diopter lens would have a focal length of approximately 20 inches.
A +1 would be one meter or roughly 40 inches. In other words, the positive
diopter lens alone will form an image of a distant object when held at its
respective focal length. When two such lenses are used together, their combined
power is practically equal to the sum of both powers. A +2 and a +3, for
instance, would equal a +5 and possess a focal length of approximately 8 inches
(39.3 divided by 5 equals approximately 8).
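The arithmetic above is easy to verify in a few lines of code. The following sketch (Python) reproduces the examples in this paragraph, using the full 39.37-inch meter that the text rounds to 39.3:

```python
# Power in diopters is the reciprocal of focal length in meters,
# and stacked diopter lenses add their powers (approximately).

def diopter_focal_length_inches(power):
    """Focal length of a diopter lens, in inches (one meter = 39.37 in)."""
    return 39.37 / power

def combined_power(*powers):
    """Approximate combined power of stacked diopter lenses."""
    return sum(powers)

print(diopter_focal_length_inches(3))                     # ~13.1 in for a +3
print(diopter_focal_length_inches(2))                     # ~19.7 in for a +2
print(diopter_focal_length_inches(combined_power(2, 3)))  # ~7.9 in for a +2 stacked with a +3
```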
When two diopter lenses are combined, the highest power should be closest to
the prime lens. Plus diopters should be placed in front of the prime lens with
their convex (outward curve) side toward the subject. If an arrow is engraved on
the rim of the diopter lens mount, it should point toward the subject.
High power plus diopter lenses, such as +8 or +10, are not practical to
manufacture for large diameter prime lenses because their optical performance
would be inferior. Best screen quality results with lower power diopters. It is
better to use a longer focal length lens and a less powerful plus diopter lens than
to employ a high power diopter on a short focal length lens.
A plus diopter lens placed in front of a conventional lens set at infinity will
form a sharp image at its particular focal length. Thus a cine lens may be focused
for extreme close-ups, without the necessity of racking it out with extension
tubes or bellows, simply by placing a plus diopter lens of the required focal
length in front of it. The distance at which a diopter lens can be focused is
decreased, however, by racking the normal lens out to its nearest focusing
distance. A cine lens may therefore be focused much closer with the same power
diopter lens by simply utilizing closer focus settings on the lens.
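Why closer prime-focus settings shorten the working distance can be illustrated with the standard thin-lens approximation, in which the powers of the diopter and the focused prime simply add. This is a sketch only; actual distances are measured from the diopter, and real lenses deviate somewhat:

```python
import math

# Thin-lens approximation: a prime focused at s meters, with a +D diopter
# in front of it, focuses at roughly 1/(D + 1/s) meters from the diopter.

def focus_distance_m(diopter_power, prime_focus_m=math.inf):
    prime_power = 0.0 if math.isinf(prime_focus_m) else 1.0 / prime_focus_m
    return 1.0 / (diopter_power + prime_power)

print(focus_distance_m(2))       # prime at infinity: 0.5 m (~20 in)
print(focus_distance_m(2, 1.0))  # prime racked to 1 m: ~0.33 m, much closer
```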
Diopter lenses may be focused at the same distance with any focal length lens,
since their power remains the same. The magnification will vary, however,
depending on the focal length of the actual camera lens employed.
The longer the focal length of the prime lens, the smaller the area covered by
the same power diopter lens. The shorter the focal length of the prime lens, the
closer the camera will have to be positioned to the subject and the more powerful
the diopter lens required to cover the same area as a longer focal length lens.
There are several advantages in employing longer focal length lenses: a less
powerful plus diopter lens is required and results are better; more space is
available between camera and subject for lighting; and the same area may be
panned with a shorter arc, so the subject is not distorted.
Diopter lenses alter the basic lens design and therefore require stopping the
lens down for reasonable sharpness. Since illuminating a small area generally
presents no problem (except heat), it is a simple matter to close down to f/8 or
f/11. Diopter lenses on the order of +1⁄2, +1, +2 or +3 will give satisfactorily
sharp results with normal focal length or semi-telephoto lenses.
Plus diopter lenses shorten the focal length of the prime lens. (See: Plus
Diopter Lenses Focal Length Conversion Table page 747.) A 100mm prime lens
with a +3 diopter lens, for instance, becomes 76.91mm in focal length. The
indicated f-number, therefore, should be divided by
approximately 1.4 to get its true value. In practice, however, the use of close-
up diopter lenses does not require any change in exposure setting, because the
change in effective f-number exactly compensates for the exposure change
caused by increased image size.
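The same additive-power approximation reproduces the focal-length figure quoted above; the function below is illustrative, not from the manual:

```python
# Effective focal length of a prime-plus-diopter combination, treating
# the combination's power as the sum of the two powers (in diopters).

def effective_focal_length_mm(prime_mm, diopter_power):
    prime_power = 1000.0 / prime_mm               # prime's power in diopters
    return 1000.0 / (prime_power + diopter_power)

f_eff = effective_focal_length_mm(100, 3)
print(round(f_eff, 2))        # ~76.92mm, matching the 76.91mm quoted above
print(round(100 / f_eff, 2))  # ~1.3, the correction the text rounds to about 1.4
```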
Split-Field Diopter Lenses
Split-field diopter lenses are partial lenses, cut so that they cover only a
portion of the prime lens. They are generally cut in half, although they may be
positioned in front of the prime lens so that more or less than half is covered.
They may be compared with bifocals for human vision, in which the eye may
focus near and far. They have an advantage over bifocals, however, in that they
may be focused sharply on both near and far subjects simultaneously.
The depth of field of the prime lens is not extended. The split-field diopter
lens simply permits focusing on a very close subject on one side of the frame,
while a distant subject is photographed normally through the uncovered portion
of the prime lens. Generally, the area in between will not be in focus. There are
instances, such as using a zoom lens with a small aperture at the wide-angle
position, when sharpness may extend all the way from the ultra-close-up to the
distant subject. The split-diopter equipped lens possesses two distinct depths of
field: one for the close subject (which may be very shallow or possess no depth
whatsoever) and another for the distant subject (which will be the normal depth
of field for the particular focal length lens and f-stop in use). It is important,
therefore, to exclude subject matter from the middle distance because it will
create a situation where the foreground is sharp, the middle distance out of focus
and the distant subject sharp.
Split-field diopter lenses require ground glass focusing to precisely line-up
both foreground and background subjects and visually check focus on each. This
is particularly important with zoom lenses, which may require camera movement
during the zoom.
Very unusual effects are possible that would otherwise require two separate
shots combined later in an optical printer via a matting process.
Making such split shots in the camera permits viewing the scene as it will
appear, rather than waiting for both shots to be optically printed onto one film.
The proper power split-field diopter lens is positioned in front of the taking
lens on the same side as the near object, so that it is sharply focused on one side
of the frame. The uncovered portion of the conventional or zoom lens is focused
in the usual manner on the distant subject. (Note: Use the Plus Diopter Lenses
Focus Conversion Table on page 818 to find near and far focusing distances with
various power diopter lenses.)
The edge of the split-diopter lens should be positioned, if possible, so that it
lines up with a straight edge in the background, such as the corner of a room, the
edge of a column or a bookcase. Eliminating the edge may prove difficult under
certain conditions, particularly with a zoom lens because the edge will shift
across the frame slightly when the lens is zoomed. It is wise to leave space
between the foreground and background subjects so that they do not overlap and
each is removed from the lens edge. This will minimize “blending.” The split-
diopter need not be lined up vertically; it may be used horizontally or at any
angle to cover a foreground subject on top, bottom, either side or at an angle
across the frame.
The split may sometimes be “covered” by filming both foreground and
background subjects against a distant neutral background for a “limbo” effect. A
can of wax, for instance, may be placed on a table so that it appears against the
same distant neutral background as the actor.
Lighting may be employed to lighten or darken the background area where the
split occurs, to make it less noticeable. Lighting should generally be balanced so
that near and far subjects match; it may be varied, of course, for pictorial effects.
Either foreground or background may be filmed in silhouette, or kept in darkness
so that one or the other may be fully illuminated during the scene. Since the
diopter lens requires no increase in exposure, balancing the lights is a simple
matter.
Formulas
By R. Evans Wetmore, P.E.
ASC Associate Member
1 LENS FORMULAS
The formulas given in this section are sufficiently accurate to solve most
practical problems encountered in cinematography. Many of the equations,
however, are approximations or simplifications of very complex optical
relationships. Therefore, shooting tests should always be considered when using
these formulas, especially in critical situations.
All values in this and the following equations must be in the same units, e.g.,
millimeters, inches, etc. For instance, when using a circle of confusion value
measured in inches, the lens focal length must be in inches, and the resulting
answer will be in inches. (Note: f-stop has no dimensions and so is not affected
by the type of units used.)
As mentioned above, the circle of confusion characterizes the degree of
acceptable focus. The smaller the circle of confusion, the higher the resulting
image sharpness. For practical purposes the following values have been used in
computing depth of field and hyperfocal distances in this manual:
The following shows how the above equations can be used to make hyperfocal
and depth of field calculations:
Example: A 35mm camera lens of 50mm focal length is focused at 20 feet and
is set to f/2.8. Over what range of distances will objects be in acceptable focus?
First convert all the units to the same system. In this example inches will be
used. Therefore, the 50mm focal length will be converted to 2 inches. (This is an
approximation as 50mm is exactly 1.969 inches, but 2 inches is close enough for
normal work.) Also the 20 feet converts to 240 inches (20 × 12). The circle of
confusion is 0.001 inches for 35mm photography.
Using Equation 1 and filling in the converted values yields a hyperfocal
distance of approximately 1,429 inches, or about 119 feet.
Using the hyperfocal distance just calculated and equations 2 and 3, we can
now calculate the near and far distances that will be in acceptable focus.
Therefore, when a 50mm lens at f/2.8 is focused at 20 feet, everything from
17.1 feet to 24.0 feet will be in acceptable focus. The total depth of field for this
example is therefore 6.9 feet (24.0 minus 17.1).
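The equations this example refers to did not survive reproduction here. The sketch below uses the standard hyperfocal relations, which agree with the figures in the text (the manual's exact equations may carry small additional terms):

```python
# All values in one unit system (inches here), per the note above.

def hyperfocal(f, fstop, coc):
    return f * f / (fstop * coc)             # standard form of Equation 1

def near_far(h, s):
    return h * s / (h + s), h * s / (h - s)  # standard forms of Equations 2 and 3

f, fstop, coc = 2.0, 2.8, 0.001  # 50mm lens, f/2.8, 35mm circle of confusion
s = 240.0                        # focused at 20 feet = 240 inches
h = hyperfocal(f, fstop, coc)    # ~1429 in (~119 ft)
near, far = near_far(h, s)
print(near / 12, far / 12)       # ~17.1 ft and ~24.0 ft
print((far - near) / 12)         # total depth of field: ~6.9 ft
```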
Therefore, focus the lens to 19.3 feet, and set the f-stop to f/11.
As this is the total depth of focus, the film must stay within plus or minus half
that value, which is about ±0.00275 inch or ±0.07mm. This dimension is equal to
the approximate value of a single strand of human hair. This is a very small
value indeed which further amplifies the statement above about the need for
precision in the gate and aperture area of the camera.
The inverse tangent (written as atan, arctan, or tan⁻¹) can be found on many
pocket calculators. Alternatively, Table 1 relates atan to θ.
Example: What are horizontal and vertical viewing angles for a 75mm Scope
lens?
A typical Scope camera aperture is 0.868″ wide by 0.735″ high. Converting
75mm to inches yields 2.953 inches (75 ÷ 25.4 = 2.953).
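The rest of the computation can be sketched as follows, assuming the standard angle-of-view relation and the usual 2x anamorphic squeeze for a Scope lens (an assumption, since the original equation is not reproduced here):

```python
import math

# Angle of view = 2 * atan(aperture dimension / (2 * focal length)),
# with the horizontal dimension doubled by a 2x anamorphic squeeze.

def angle_of_view_deg(dimension_in, focal_in, squeeze=1.0):
    return math.degrees(2 * math.atan(dimension_in * squeeze / (2 * focal_in)))

focal = 75 / 25.4                                    # 75mm in inches (~2.953)
print(angle_of_view_deg(0.868, focal, squeeze=2.0))  # horizontal: ~32.8 degrees
print(angle_of_view_deg(0.735, focal))               # vertical:   ~14.2 degrees
```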
1.5 Lens, subject, distance, and image size relationships
Using the simple drawing on the previous page, the relationships between
camera distance, object size, image size, and lens focal length for spherical
lenses may easily be calculated in the following equation:
Equation 11 may be rewritten in any of the following ways depending on the
problem being solved:
Example: The displacement from the infinity position of a 50mm (2 inch) lens
focused at 10 feet is as follows:
Converting all distances to inches and applying Equation 17 yields
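Equation 17 itself is not reproduced here; the following sketch uses the standard thin-lens displacement from the infinity position, x = f²/(s − f), which fits the example's setup:

```python
# Displacement of the lens from its infinity position when focused
# at a subject distance s (all values in the same units, inches here).

def displacement_from_infinity(f, s):
    return f * f / (s - f)

# 50mm (2-inch) lens focused at 10 feet (120 inches):
print(displacement_from_infinity(2.0, 120.0))  # ~0.034 in (~0.86mm)
```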
2 SHOOTING PRACTICES
2.1 Running times, feet, and frames
Table 2 shows the linear sound speed of common theatrical film gauges and
the number of frames per foot.
Example: How many feet of 35mm film run through a sound camera in 4 and a
half minutes?
Example: How much film goes through a 16mm sound camera in 8 seconds?
To convert the decimal to frames, multiply the decimal part by the number of
frames per foot:
Example: What is the exposure time when shooting at 24 frames per second
with a 180° shutter?
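These conversions are simple enough to script. The following sketch works the three examples above, using the standard figures of 16 frames per foot for 35mm and 40 frames per foot for 16mm, both at 24 fps sound speed:

```python
def feet_per_second(fps, frames_per_foot):
    return fps / frames_per_foot

# 35mm for 4.5 minutes at sound speed:
print(feet_per_second(24, 16) * 4.5 * 60)  # 405 ft

# 16mm for 8 seconds:
feet = feet_per_second(24, 40) * 8         # 4.8 ft
print(int(feet), round((feet % 1) * 40))   # 4 ft 32 frames

# Exposure time at 24 fps with a 180-degree shutter:
def exposure_time(fps, shutter_angle):
    return (shutter_angle / 360.0) / fps

print(exposure_time(24, 180))              # 1/48 second (~0.0208 s)
```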
Notes:
Field of view and object velocity must be in the same units of measurement
(inches, feet). Falling objects will increase in velocity (use velocity charts to
determine event time).
Shooting frame rate depends on, among other things, subject matter, direction
of movement in relation to the camera position, and miniature scale. Generally,
however, the smaller the miniature, the faster the required frame rate. Also, as
magnification decreases, the necessary frame rate drops.
The following may be used as a guide and a starting point:
Example: What frame rate should be used to shoot a 1:4 (quarter scale)
miniature?
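The guide table is not reproduced here. A commonly used rule of thumb, sketched below, scales the frame rate with the square root of the miniature ratio so that falling objects appear to accelerate at full-scale gravity; treat it as a starting point, as the text advises:

```python
import math

def miniature_frame_rate(base_fps, scale_ratio):
    """scale_ratio: e.g., 4 for a 1:4 (quarter-scale) miniature."""
    return base_fps * math.sqrt(scale_ratio)

print(miniature_frame_rate(24, 4))  # 48 fps for a quarter-scale miniature
```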
2.4 Image blur
When shooting high speed photography, the question often comes up: what
frame rate is needed to reduce the image blur to an acceptable amount? This
question is especially important to cinematographers shooting fast-moving
subjects such as missile tests, horse races, airplanes, etc.
The following equations may be used to calculate the blur of an image caused
by the movement of an object during exposure:
Example: What is the image blur of a thin vertical line painted on the side of a
racing car moving at 153 miles per hour when shot from 33.3 feet away with a 2
inch lens at 48 fps with a shutter angle of 180°?
First, all of the units have to be brought to common units; in this case inches
and seconds are a good choice. Therefore, D is 400 inches (12 × 33.3) and v is
2693 in/sec (153 × 5280 × 12 ÷ 3600).
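The blur equations are likewise not reproduced here; this sketch uses the standard approximation (image magnification times the object's travel during one exposure), which fits the worked numbers:

```python
# Units: inches and seconds throughout.

def image_blur(f, distance, velocity, fps, shutter_angle):
    m = f / (distance - f)             # magnification at this subject distance
    t = (shutter_angle / 360.0) / fps  # exposure time per frame
    return m * velocity * t            # blur at the image plane

# 2-inch lens, 400 in away, 2693 in/sec, 48 fps, 180-degree shutter:
print(image_blur(2.0, 400.0, 2693.0, 48, 180))  # ~0.141 in of blur on the negative
```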
3 LIGHT AND EXPOSURE
3.1 Units for measuring light
The terms used to measure light can often be confusing. The three main
measures of light are intensity, illumination, and brightness. Each refers to a very
different characteristic.
Example: How many foot candles are required to expose an ASA 200 film
shooting at 24 fps at f/4.0?
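A common form of the incident-light exposure relation, FC = 25 × N² ÷ (EI × t), answers this; the calibration constant varies slightly among meter makers, so treat the result as approximate:

```python
def required_footcandles(fstop, exposure_index, exposure_time):
    return 25.0 * fstop**2 / (exposure_index * exposure_time)

# ASA 200, f/4, 24 fps with a 180-degree shutter (1/48-second exposure):
print(required_footcandles(4.0, 200, 1 / 48))  # ~96 footcandles
```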
Example: A 3 inch lens is moved 150mm further from the film plane by a
bellows. The exposure time before the lens was moved was 1⁄48 of a second.
What is the new exposure time?
First, the units must be made the same. A 3-inch lens has a focal length of
76.2mm (3 × 25.4). Then using the above equation
Then multiply the old exposure time of 1⁄48 by ∆ to get a new exposure time of
roughly 0.18 second (about 1⁄5.5 second).
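The bellows-extension factor can be sketched as follows, assuming the lens was originally focused near infinity, so that the new image distance is the focal length plus the added extension:

```python
# Exposure scales with the square of image distance over focal length.

def extension_factor(focal_mm, added_extension_mm):
    image_distance = focal_mm + added_extension_mm
    return (image_distance / focal_mm) ** 2

delta = extension_factor(76.2, 150)  # 3-inch (76.2mm) lens moved 150mm out
print(round(delta, 2))               # ~8.81
print((1 / 48) * delta)              # new exposure time: ~0.18 s (about 1/5.5)
```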
These comprehensive “All Formats Depth of Field Tables” will provide you
with depth of field information for virtually all of the currently used 16, 35,
and 65mm lenses. You will also find Super 8mm tables and additional 16mm
tables beginning on page 800.
Please see the chapter on lenses for a more thorough discussion of this subject.
These tables are computed mathematically, and should be used as a guide
only. Depth of field is a useful concept within limits. Technically speaking, an
object is only in focus at one precise point in space. Depth of field determines
the range in front of and behind a designated focusing distance, where an object
still appears to be acceptably in focus. A low resolving film stock or lens may
appear to have greater depth of field, because the “in focus” image is already so
soft, it is more difficult to determine when it goes further out of focus.
Conversely, a very sharp, high contrast lens may appear to have shallow depth of
field, because the “in focus” image has such clarity, it is much easier to notice
when it slips out of a range of acceptable focus.
These tables can be nothing other than generic in their information. They will
give you a very close approximation of any given focal length’s depth of field
characteristics. Truly precise Depth of Field calculations cannot be performed
without knowing the size of a specific lens’ entrance pupil.
That being said, these charts should be very helpful, unless you are trying to
measure down to an accuracy of less than a couple of inches. If you are
demanding that level of precision, then you must shoot a test of the lens in
question, because no calculation can provide the empirical data a visual
evaluation can.
These tables are calculated based on a circle of confusion of 0.001″ (1⁄1000 inch). To
calculate a depth of field based upon a more critical circle of confusion of half
that size (0.0005 or 5⁄10,000), find your chosen f-stop at the distance desired, then
read the depth of field data two columns to the left. The 0.0005 circle of
confusion can be used for lenses of greater sharpness or contrast, or for a more
traditional method of determining 16mm depth of field.
One more note: you will see some lenses at certain distances that indicate a
depth of field of effectively nothing (e.g., 10′0″ to 10′0″). This means the depth of
field is less than an inch, and we recommend that a test is shot to determine such
a critical depth of field. Charts should never be relied upon when exploring such
narrow fields of focus.
For further discussion on this subject see pages 136 and 140.
These tables were compiled with the invaluable help of Evans Wetmore, P.E.
Evans is Vice President of Advanced Engineering in the News Technology
Group of Fox NewsCorp. A Fellow of SMPTE, his feature-film credits include the
special-effects engineering for Star Trek: The Motion Picture, Blade Runner and
Brainstorm.
Handheld Apps For Production Use
by Taz Goldstein
ANDROID
ADibu
http://www.fatslimmer.com/
Helps filmmakers convert frames, feet, and time code and see the result of all
three simultaneously. The film counter calculator can be used as a stopwatch that
counts time (precision milliseconds), length in feet and total frames.
Photo Tools
ADibu
http://www.fatslimmer.com/
The app contains 15 calculators, good for any format, digital or film SLR
camera. Calculations include exposure, circle of confusion, depth of field,
magnification, angle of view, field of view, flash guide number and aperture,
camera pixels, aperture average, stops difference, and an exposure unit
converter.
Acacia
Ephemerald Creative Arts
http://www.ephemerald.com/
Provides a depth of field calculator, equipment management, shot logging and a
rudimentary slate.
CamCalc
Go Visual, Inc.
http://www.govisualinc.com/
Multifunction app that provides calculator for depth of field, field of view, focal
length equivalents, flash exposure calculations, color temperature conversion,
miniature photography, and solar calculations (including sun path).
Depth of Field Calculator
Allen Zhong
Photographer’s depth of field calculator.
On The Level 4
Stephen Lebed
http://apps.mechnology.com/my-apps/
A combination digital inclinometer and bubble level. Measurements are
calibrated to an accuracy of hundredths of a degree.
SL DigiSlate
Stephen Lebed
http://apps.mechnology.com/my-apps/
A digital movie slate (clapper board) with integrated shot logging. Logs can be
edited and emailed. Slate information can be entered manually, and advanced
with simple controls.
SL Director’s Viewfinder
Stephen Lebed
http://apps.mechnology.com/my-apps/
A virtual director’s viewfinder that uses an Android device’s built-in camera to
simulate multiple cameras, formats and lenses.
IOS
LightMeter
Ambertation
http://iphone.ambertation.de/lightmeter/
Turns an iPhone’s camera into an exposure meter. The app allows you to change
the f-stop, shutter or ISO values after you’ve measured the scene without altering
the exposure. Also allows users to include filter parameters.
Panascout
Panavision
http://www.panascout.com
Allows filmmakers to capture images of a given location, while recording GPS
data, compass heading, the current date, and sunrise/sunset times. You can then
share the images and data in a variety of ways.
Catchlight
Ben Syverson
http://bensyverson.com/software/catchlight/
Turns your mobile iOS device into a color-programmable light source. It can be
used as a catchlight/eye light, or as a mini softbox for low-light photography.
Cinemek Storyboard Composer HD
Cinemek Inc.
http://www.cinemek.com/storyboard/
A mobile storyboarding and previsualization app that allows users to acquire
photos with their phones, and then add traditional storyboarding markups such as
dolly, track, zoom and pan. Users can reorder panels, add stand-ins, set panel
durations, enter text notes, record audio, and then play it all back to get real time
feedback on pacing and framing. Storyboards can be exported as animated
movies or as PDF files.
Clinometer
Peter Breitling
http://www.plaincode.com/
A professional angle/slope measurement app for mobile iOS devices. This
virtual clinometer offers many manual and automatic features as well as a
variety of informational displays.
Focalware
Spiral Development Inc.
http://spiraldev.com/focalware/
Focalware calculates sun and moon position for a given location and date. Use
the interactive compass to determine the path and height of the sun or moon.
Gel Swatch Library
Wybron, Inc.
http://www.wybron.com
Lets lighting production personnel browse, search, and compare more than 1,000
gel color filters made by the following manufacturers: Apollo, GAM, Lee, and
Rosco. Users can compare similar and complementary colors, as well as
examine each color’s detailed Spectral Energy Distribution graphs.
Helios Sun Position Calculator
Chemical Wedding
http://www.chemicalwedding.tv/helios.html
Helios is a sun position calculator and compass that graphically represents the
position of the sun from dawn to dusk, on any given day, in any given place,
without the need for complex tables or graphs. Four modes of operation allow
users to view graphical representations of the sun’s predicted position, elevation,
and path in the sky. It can also calculate the proportional lengths of the shadows
being cast.
Light Calc
D!HV Lighting
http://www.dhvproductions.com/
Light Calc is a photometric calculation tool featuring a database of commonly
used theatrical lighting fixtures. Users can select a type of lighting fixture,
choose one of several lamp types, and set a throw distance. The calculator will
then return a beam diameter and field diameter, in feet, as well as center field
illumination, in footcandles.
MatchLens
Indelible Pictures, Inc.
http://web.me.com/donmatthewsmith/Site/MatchLens.html
This calculator computes the equivalent lens focal length needed to produce the
same field of view on two cameras with different aperture/sensor sizes. It
performs a “Match Lens” calculation and returns the closest equivalent angle of
view lens, in millimeters, for both vertical and horizontal frames.
pCAM Film/Digital Calculator
David Eubank
http://www.davideubank.com
A well-known film and video tool that performs a wide variety of calculations
including: depth of field, hyperfocal, splits/aperture finder, field of view, field of
view preview, angle of view, focal length matching, exposure, running time to
length, shooting to screen time, HMI flicker-free, color correction (choosing
filters), diopter, macro, time lapse, underwater distance, scene illumination
(beam intensity), light coverage (width/distance), mired shift (with suggested
color correction gels), and more.
Photo fx
The Tiffen Company
http://www.tiffen.com/photofx_homepage.html
Lets users apply multiple effects to still photos. Filters include simulations of
many popular Tiffen glass filters, specialized lenses, optical lab processes, film
grain, color corrections, natural light and photographic effects.
Pocket DIT
Clifton Production Services
http://www.cliftonpost.com
A multifunction app that provides a RED ONE virtual menu navigator, a depth
of field calculator (with near focus, far focus, and hyperfocal distances for
16mm, 35mm, and RED formats), a transfer time calculator, a storage calculator,
and a maximum fps indicator that helps users determine the maximum frame
rate/time base that can be recorded with a particular RED configuration.
PocketLD
Michael Zinman
http://www.lightingiphoneapps.com
Pocket LD is a photometric database and calculation tool for lighting
professionals. Its large, searchable, user-expandable fixture database can be
referenced while organizing easy-to-manage lists of commonly used items.
Additionally, users can enter any throw distance to determine beam/field
diameter and footcandles/lux for any selected fixture and lamp.
PowerCalc
West Side Systems
http://westsidesystems.com/iphone/
PowerCalc performs basic electrical power calculations with watts, volts, amps,
and motor power factor. It has three modes: DC mode, AC Resistive mode, and
AC Inductive mode. It works for any voltage, in any country.
Wrap Time Lite
RedPipe Media
http://redpipemedia.com
Can help crew members keep track of hours, pay, and job information. Users can
save their call, meal, and wrap times. The app will then calculate a user’s pay,
overtime, and meal penalties according to the provided job information. Various
options allow users to customize the experience, and include additional expenses
and discounts.
mRelease
being MEdia, LLC
http://www.mReleaseApp.com/
mRelease helps users obtain appearance releases, property releases, location
releases, and crew releases. After setting up the app, users can add details about
their subject, import an image from the built-in camera or photo library, and
capture a signature via their device’s touch screen. The app creates, stores, and
emails PDF files of the signed releases.
Toland ASC Digital Assistant
Chemical Wedding in Partnership with the ASC
http://www.chemicalwedding.tv
A full-featured, multitasking photographic and camera calculation system.
Unlike single-function calculators that answer specific questions, Toland is
designed to track your photographic choices as you make them. It serves as a
reflection of your entire photographic system. As you change the camera speed,
you get feedback on how it affects running time and exposure; when you change
lenses, depth of field and field of view update automatically. The app will also
log information and build comprehensive camera reports.
Camera Order
Practical Applications
http://www.practical-applications.com/
This app offers cinematographers and camera assistants the ability to create
complete camera package lists and email them straight from the app to
production or the rental house. It features a complete list of cameras, lenses,
accessories, filters, support, film and media.
Bento
FileMaker, Inc.
http://www.filemaker.com/products/bento/
A simple database application that easily syncs with its Mac-based counterpart.
Since most databases are user generated, the app can be used to track just about
anything (i.e., equipment, supplies, crew, locations, camera logs, etc.). Since all
user data lives on the device (and possibly on a synched Mac computer), and not
in the cloud, no Internet connection is required to view or edit a database.
GoodReader for iPhone
Good.iWare Ltd.
http://www.goodreader.net/goodreader.html
A very popular document reader for iOS devices that allows for easy reviewing,
bookmarking, and annotating of PDF files (i.e., scripts, call sheets, camera logs,
etc.). Documents can be imported and exported in a wide variety of ways.
TechScout Touch
LiteGear Inc.
http://www.litegear.com/techscout
This app helps lighting professionals create rental orders intended to be
submitted to rental houses or to studio set lighting departments. Users enter basic
info about the job, and then begin adding items to their list. The app includes
over 1000 lighting equipment items separated into categories and sorted by type
and wattage. New items can also be entered manually. The resulting equipment
list can be e-mailed directly from the app.
Movie*Slate
PureBlend Software
http://www.pureblendsoftware.com/movieslate
A powerful, multifunction digital slate (clapper board) that can sync to camera
time code, generate new time code, playback synced music, log shots with
extensive notes, and export those logs to editing systems like Final Cut Pro and
Avid. The app can also wirelessly sync with other iOS devices running
Movie*Slate. The base price does not include certain time-code features.
Easy Release
ApplicationGap
http://www.applicationgap.com
Easy Release helps users obtain a variety of releases (i.e., talent, location, etc.)
using customizable forms. After setting up the app, users can add details about
their subject, import an image from the built-in camera or photo library, and
capture a signature via their device’s touch screen. Model and witness
information can be imported directly from your device’s contact list. The app
creates, stores, and e-mails PDF files of the signed releases.
PDF Expert for iPad
Readdle
http://readdle.com
A very popular document reader for iOS devices that allows for easy reviewing,
bookmarking, and annotating of PDF files (i.e., scripts, call sheets, camera logs,
etc.). Documents can be imported and exported in a wide variety of ways.
Artemis Remote for iPad
Chemical Wedding
http://www.chemicalwedding.tv/artemis.html
Artemis Remote for iPad can wirelessly receive streaming video from an iPhone
(or iPod Touch) running Artemis Director’s Viewfinder. This allows many
people to remotely view whatever the iPhone (or iPod Touch) user is seeing.
Artemis Remote users can also change lenses, and capture pictures to their
iPad’s image gallery.
OmniGraffle
The Omni Group
http://www.omnigroup.com/products/omnigraffle-ipad/
Helps people create diagrams and charts using pre-existing or original graphic
elements. There are collections of film and video related elements available for
free online. OmniGraffle is very useful when blocking camera and actor
movements, designing lighting grids, or creating wiring diagrams.
TouchDraw
Elevenworks, LLC
http://elevenworks.com/touchdraw
TouchDraw is a powerful, easy to use, illustration and drawing application for
iPad that can help users create diagrams for camera blocking, actor blocking, and
lighting setups. Like OmniGraffle, but with fewer features at a lower cost.
AJA DataCalc
AJA Video Systems, Inc.
http://www.aja.com/
Computes storage requirements for professional video and audio media. The app
works with all the most popular industry video formats and compression
methods, including Apple ProRes, DVCProHD, HDV, XDCAM, DV, CineForm,
REDCODE, Avid DNxHD, Apple Intermediate, 16-bit RGB and RGBA,
uncompressed, and more. Video standards supported include NTSC, PAL, 1080i,
1080p, 720p, 2K and 4K.
Almost DSLR
Rainbow Silo
http://www.rainbowsilo.com/
Gives more control to users shooting HD video with their iPhone or iPod Touch.
Unlike the built-in camera app, Almost DSLR lets users lock focus, lock iris, and
lock white balance. Users can also monitor audio levels, show/hide grids, and
control the built-in camera light.
BigStopWatch & BigStopWatch HD
Objective-Audio
http://objective-audio.jp/apps/
A large, graphic stopwatch that features an easy to control and read interface.
The app also provides a lap timer and countdown timer.
Electrical Toolkit
Niranjan Kumar
http://iappsworld.com/site/iApps.html
Calculates circuit values, and instantly updates when users edit data.
Sun Seeker: 3D Augmented Reality Viewer
ozPDA
http://www.ozpda.com/
Reports the sun’s position and path on a flat compass view, and on an augmented
reality view which displays the app’s data over a live view from the device’s
camera (as a user moves the device, the app updates the overlaid information).
WeatherBug for iPad
WeatherBug
http://weather.weatherbug.com/
Displays current local weather conditions, as well as extended forecasts, severe
weather alerts, an animated Doppler radar, and live weather camera images.
IOS & ANDROID
Artemis Directors Viewfinder
Chemical Wedding
http://www.chemicalwedding.tv/artemis.html
Artemis is a digital director’s viewfinder that works much the same way as a
traditional director’s viewfinder, though much more accurately. Users can select
camera format, aspect ratio, and lens types. Using your device’s built-in camera,
Artemis will simulate the lens views you can expect when ready to shoot. Users
can switch between virtual lenses, save shots, and wirelessly transmit video to
Artemis Remote for iPad (wireless transmission not available on Android).
KODAK Cinema Tools
Eastman Kodak Co.
http://motion.kodak.com/US/en/motion/Tools/Mobile/index.html
The app features a Depth of Field Calculator (which works with any film format,
including Super 8, 16mm, 35mm and 65mm), a film calculator that helps with
footage computations, and a broad film/video glossary. The included contact tool lets users
quickly contact a Kodak representative online to get their technical questions
answered.
WEB APP
Video Storage Calculator
Zebra Films
http://www.zebrafilm.com/products/filmcalculator/filmcalculator-for-windows-mobile
Contains over thirty-two calculations, as well as a film stock and camera
database. The app also contains a large lamp database with full details and
calculations.
Quick Picture Monitor Set-Up
by Lou Levinson
ASC Associate Member
Lou Levinson has spent more than a quarter century as a top feature-film
colorist. Levinson is the chair of the DI subcommittee of the ASC Technology
Committee. He took four years off to do HD research for MCA/MEI at Universal
Studios. Since 1998, he has been the senior colorist for feature masters and
digital intermediates at Post Logic. He holds an MFA from the Art Institute of
Chicago.
Further References
3-D
Spottiswoode, Raymond, Theory of Stereoscopic Transmission, Berkeley, CA; University of
California Press, 1953.
Aerial Cinematography
Wagtendonk, W.J., Principles of Helicopter Flight, Aviation Supplies & Academics, Newcastle,
WA, 1996.
Crane, Dale, Dictionary of Aeronautical Terms, Aviation Supplies & Academics, Newcastle, WA,
1997.
Spence, Charles, Aeronautical Information Manual and Federal Aviation Regulations,
McGraw-Hill, New York, NY, 2000.
Padfield, R., Learning to Fly Helicopters, McGraw-Hill, New York, NY, 1992.
Industry-Wide Labor-Management Safety Bulletins at: http://www.csatf.org/bulletintro.shtml
Arctic Cinematography
Eastman Kodak Publication: Photography Under Arctic Conditions.
Fisher, Bob, “Cliffhanger’s Effects were a Mountainous Task,” American Cinematographer,
Vol. 74, No. 6, pp. 66-74, 1993.
Miles, Hugh, “Filming in Extreme Climatic Conditions,” BKSTS Journal Image Technology,
February 1988.
Moritsugu, Louise, “Crew’s Peak Performance Enhanced Alive,” American Cinematographer,
Vol. 74, No. 6, pp. 78-84, 1993.
Camera
Adams, Ansel, The Camera, New York, Morgan and Morgan, Inc., 1975.
Fauer, ASC, Jon, Arricam Book, Hollywood, CA; ASC Press, 2002.
Fauer, ASC, Jon, Arriflex 16 SR Book, Boston, MA; Focal Press, 1999.
Fauer, ASC, Jon, Arriflex 16 SR3 the Book, Arriflex Corp., 1996.
Fauer, ASC, Jon, Arriflex 35 Book, Boston, MA; Focal Press, 1999.
Fauer, ASC, Jon, Arriflex 435 Book, Arriflex Corp., 2000.
Samuelson, David W., Panaflex Users’ Manual, Boston, MA; Focal Press, 1990.
Camera Manufacturers
Canon www.canon.com
Aaton, +33 47642 9550, www.aaton.com
ARRI, (818) 841-7070, www.arri.com
Fries Engineering, (818) 252-7700
Ikonoskop AB, +46 8673 6288, [email protected]
Panasonic, www.panasonic.com
Panavision, (818) 316-1000, www.panavision.com
Photo-Sonics, (818) 842-2141, www.photosonics.com
Pro8mm, (818) 848-5522, www.pro8mm.com
Red, +1-949-206-7900, www.red.com
Silicon Imaging, www.siliconimaging.com
SONY, pro.sony.com
Vision Research, www.visionresearch.com
Camera Supports
A + C Ltd., +44 (0) 208-427 5168, www.powerpod.co.uk
Aerocrane, (818) 785-5681, www.aerocrane.com
Akela: Shotmaker, (818) 623-1700, www.shotmaker.com
Aquapod, (818) 999-1411
Chapman/Leonard Studio Equipment, (888) 883-6559, www.chapman-leonard.com
Egripment B.V., +31 (0)2944-253.988, Egripment USA, (818) 787-4295, www.egripment.com
Fx-Motion, +32 (0)24.12.10.12, www.fx-motion.com
Grip Factory Munich (GFM), +49 (0)89 31901 29-0, www.g-f-m.net
Hot Gears, (818) 780-2708, www.hotgears.com
Hydroflex, (310) 301-8187, www.hydroflex.com
Isaia & Company, (818) 752-3104, www.isaia.com
J.L. Fisher, Inc., (818) 846-8366, www.jlfisher.com
Jimmy Fisher Co., (818) 769-2631
Libra, (310) 966-9089
Louma, +33 (0)1 48 13 25 60, www.loumasystems.biz
Megamount, +44 (0)1 932 592 348, www.mega3.tv
Movie Tech A.G., +49 0 89-43 68 913, Movie Tech L.P., (678) 417-6352, www.movietech.de
Nettman Systems International, (818) 623-1661, www.camerasystems.com
Orion Technocrane, +49 171-710-1834, www.technocrane.de
Pace Technologies, (818) 759-7322, www.pacetech.com
Panavision Remote Systems, (818) 316-1080, www.panavision.com
Panther, +49 89 61 39 00 01, www.panther-gmbh.de
Pictorvision, (818) 785-9282, www.pictorvision.com
Spacecam, (818) 889-6060, www.spacecam.com
Strada, (541) 549-4229, www.stradacranes.com
Straight Shoot’r, (818) 340-9376, www.straightshootr.com
Technovision, (818) 782-9051, www.technovision-global.com
Cinematography
Brown, Blain, Cinematography, Boston, MA; Focal Press, 2002.
Campbell, Russell, Photographic Theory for the Motion Picture Cameraman, London, Tantivy
Press, 1970.
Campbell, Russell, Practical Motion Picture Photography, London, Tantivy Press, 1970.
Carlson, Verne and Sylvia, Professional Cameraman’s Handbook, 4th edition, Boston, MA;
Focal Press, 1994.
Clarke, ASC, Charles G., Professional Cinematography, Hollywood, CA; ASC Press, 2002.
Cornwell-Clyne, Major Adrian, Color Cinematography, 3rd edition, Chapman & Hall Ltd., 1951.
Malkiewicz, Kris J. and Mullen, M. David, Cinematography: A Guide for Filmmakers and Film
Teachers, New York, Fireside, 2005.
Mascelli, ASC, Joseph V., The 5 C’s of Cinematography, Beverly Hills, CA, Silman-James
Press, 1998 (c1965).
Wilson, Anton, Anton Wilson’s Cinema Workshop, Hollywood, CA; ASC Press, 1983, 1994.
Color
Albers, J., Interaction of Color, New Haven and London; Yale University Press, 1963.
Eastman Kodak Publication H-12, An Introduction to Color, Rochester, 1972.
Eastman Kodak Publication E-74, Color As Seen and Photographed, Rochester, 1972.
Eastman Kodak Publication H-188, Exploring the Color Image, Rochester.
Evans, R. M., An Introduction to Color, New York, NY; John Wiley & Sons, 1948.
Evans, R. M., Eye, Film, and Camera Color Photography, New York, NY; John Wiley & Sons,
1959.
Evans, R. M., The Perception of Color, New York, NY; John Wiley & Sons, 1974.
Friedman, J. S., History of Color Photography, Boston, MA; American Photographic
Publishing Company, 1944.
Hardy, A. C., Handbook of Colorimetry, MIT, Cambridge, MA; Technology Press, 1936.
Hunt, R. W. G., The Reproduction of Colour, Surrey, UK, Fountain Press, 1995.
Itten, J., The Art of Color, New York, Van Nostrand Reinhold, 1973.
National Bureau of Standards Circular 553, The ISCC-NBS Method of Designating Colors and
A Dictionary of Color Names, Washington D. C., 1955.
Optical Society of America, The Science of Color, New York, NY; Thomas Y. Crowell
Company, 1953.
Society of Motion Picture and Television Engineers, Elements of Color in Professional Motion
Pictures, New York, NY, 1957.
Wall, E. J., History of Three-Color Photography, New York and London, Boston, MA;
American Photographic Publishing Company, 1925.
Film
Adams, Ansel, The Negative, New York, Little Brown, 1989.
Adams, Ansel, The Print, New York, Little Brown,1989.
Eastman Kodak Publication H-1: Eastman Professional Motion Picture Films.
Eastman Kodak Publication H-23: The Book of Film Care.
Eastman Kodak Publication H-188: Exploring the Color Image.
Eastman Kodak Publication N-17: Infrared Films.
Eastman Kodak Publication: ISO vs EI Speed Ratings.
Eastman Kodak Publication: Ultraviolet and Fluorescence Photography.
Hayball, Laurie White, Advanced Infrared Photography Handbook, Amherst Media, 2001.
Hayball, Laurie White, Infrared Photography Handbook, Amherst Media, 1997.
Film Design
Affron, Charles and Affron, Mirella Jona, Sets in Motion, Rutgers University Press, 1995.
Carrick, Edward, Designing for Films, The Studio LTD and the Studio Publications Inc, 1941,
1947.
Carter, Paul, Backstage Handbook, 3rd edition., Broadway Press, 1994.
Cruickshank, Dan, Sir Banister Fletcher’s A History of Architecture, 20th edition, New York,
NY, Architectural Press, 1996.
Edwards, Betty, Drawing on the Right Side of the Brain, revised edition, Jeremy P. Tarcher,
1989.
de Vries, Jan Vredeman, Perspective, Dover Publications, 1968.
Heisner, Beverly, Studios, McFarland and Co., 1990.
Katz, Stephen D., Shot by Shot – Visualizing from Concept to Screen, Boston, MA; Focal Press,
1991, pp. 337-356.
Preston, Ward, What an Art Director Does, Silman-James Press, 1994.
Raoul, Bill, Stock Scenery Construction Handbook, 2nd edition, Broadway Press, 1999.
St John Marner, Terrance, Film Design, The Tantivy Press, 1974.
Film History
The American Film Institute Catalog: Feature Films 1911–1920, Berkeley and Los Angeles,
University of California Press, 1989.
The American Film Institute Catalog: Feature Films 1931–1940, Berkeley and Los Angeles,
University of California Press, 1993.
The American Film Institute Catalog: Feature Films 1921–1930, Berkeley and Los Angeles,
University of California Press, 1997.
The American Film Institute Catalog: Feature Films 1961–1970, Berkeley and Los Angeles,
University of California Press, 1997.
The American Film Institute Catalog: Within Our Gates: Ethnicity in American Feature Films
1911–1960, Berkeley and Los Angeles, University of California Press, 1989.
The American Film Institute Catalog: Feature Films 1941–1950, Berkeley and Los Angeles,
University of California Press, 1999.
Belton, John, Widescreen Cinema, Cambridge, MA; Harvard University Press, 1992.
Brownlow, Kevin, Hollywood the Pioneers, New York, NY; Alfred A. Knopf, 1979.
Brownlow, Kevin, The Parade’s Gone By, New York, Knopf, 1968.
Coe, Brian, The History of Movie Photography, New York, Zoetrope, 1982.
Fielding, Raymond, A Technological History of Motion Pictures and Television, University of
California Press, 1967.
Finler, Joel W., The Hollywood Story, New York, Crown, 1988.
Ryan, R.T., A History of Motion Picture Color Technology, London, Focal Press, 1977.
MacGowan, Kenneth, Behind the Screen: the History and Techniques of the Motion Picture,
New York, Delacorte Press, 1965.
Rotha, Paul and Griffith, Richard, The Film Till Now: A Survey of World Cinema, London,
Spring Books, 1967. (New York, Funk & Wagnalls, 1951.)
Schatz, Thomas, The Genius of the System: Hollywood Filmmaking in the Studio Era, New
York, Pantheon, 1988.
Turner, George E., The Cinema of Adventure, Romance and Terror, Hollywood, CA; ASC
Press, 1989.
Film Processing
ACVL Handbook, Association of Cinema and Video Laboratories.
Case, Dominic, Motion Picture Film Processing, London, Butterworth and Co. Ltd. (Focal
Press), 1985.
Eastman Kodak publications: H-1, H-2, H-7, H-17, H-21, H-23, H-24.07, H-26, H-36, H-37,
H-37A, H-44, H-61, H-61A, H-61B, H-61C, H-61D, H-61E, H-61F, H-807 and H-822.
Happe, L. Bernard, Your Film and the Lab, London, Focal Press, 1974.
Kisner, W.I., Control Techniques in Film Processing, New York, SMPTE, 1960.
Ryan, R.T., Principles of Color Sensitometry, New York, SMPTE, 1974.
Filters
Eastman Kodak Publication B-3: Filters.
Harrison, H.K., Mystery of Filters-II, Porterville, CA; Harrison & Harrison, 1981.
Hirschfeld, ASC, Gerald, Image Control, Boston, MA; Focal Press, 1993.
Hypia, Jorma, The Complete Tiffen Filter Manual, AmPhoto, New York, 1981.
Smith, Robb, Tiffen Practical Filter Manual.
Tiffen Manufacturing Corporation Publication T179: Tiffen Photar Filter Glass
Lenses
Angenieux, P., “Variable focal length objectives,” U.S. Patent No. 2,847,907, 1958.
Bergstein, L., “General theory of optically compensated varifocal systems,” JOSA Vol. 48, No.
9, pp. 154-171, 1958.
Cook, G.H.,”Recent developments in television optics,” Royal Television Society Journal, pp.
158-167, 1973.
Cox, Arthur, Photographic Optics, A Modern Approach to the Technique of Definition,
expanded edition, London, Focal Press, 1971.
Kingslake, R. “The development of the zoom lens,” SMPTE Vol. 69, pp. 534-544, 1960.
Mann, A., Ed., “Zoom lenses,” SPIE Milestone Series Vol. MS 85, 1993.
Neil, I.A. and Betensky, E.I., “High performance, wide angle, macro focus, zoom lens for
35mm cinematography,” SPIE Vol. 3482, pp. 213-228, Kona, Hawaii, U.S.A., 1998.
Neil, I.A., “First order principles of zoom optics explained via macro focus conditions of fixed
focal length lenses,” SPIE Vol. 2539, San Diego, California, U.S.A., 1995.
Neil, I.A., “Liquid optics create high performance zoom lens,” Laser Focus World, Vol. 31, No.
11, 1995.
Neil, I.A., “Uses of special glasses in visual objective lenses,” SPIE Vol. 766, pp. 69-74, Los
Angeles, California, U.S.A., 1987.
Zuegge, H. and Moeller, B., “A complete set of cinematographic zoom lenses and their
fundamental design considerations,” Proceedings of the 22nd Optical Symposium, pp. 13-16,
Tokyo, Japan, 1997.
Lighting
Adams, Ansel, Artificial Light Photography, New York, Morgan and Morgan, Inc., 1956.
Alton, John, Painting With Light, Berkeley and Los Angeles, University of California Press,
1995.
Bergery, Benjamin, Reflections – 21 Cinematographers at Work, Hollywood, CA; ASC Press,
2002.
Box, Harry, Set Lighting Technician’s Handbook, Boston, MA, Focal Press, 2003.
Malkiewicz, Kris J., Film Lighting: Talk with Hollywood’s Cinematographers and Gaffers,
New York, Touchstone, a Division of Simon & Schuster, 2012.
Millerson, Gerald, The Technique of Lighting for Television and Film, Boston, Focal Press,
1991
Miscellaneous
Arnheim, Rudolf, Art and Visual Perception, Berkeley, CA, University of California Press,
1974.
Darby, William, Masters of Lens and Light: A Checklist of Major Cinematographers and Their
Feature Films, Metuchen, NJ, Scarecrow Press, 1991.
Houghton, Buck, What a Producer Does, Silman-James Press, 1991.
Kehoe, Vincent J. R., The Technique of the Professional Makeup Artist, Boston, MA, Focal
Press, 1995.
Kepes, Gyorgy, Language of Vision, New York, NY, Dover Publications, 1995.
Moholy-Nagy, L., Vision in Motion, Wisconsin; Cuneo Press, 1997.
Nilsen, Vladimir, The Cinema as a Graphic Art, New York; Garland Pub., 1985.
Waner, John, Hollywood’s Conversion of All Production to Color Using Eastman Color
Professional Motion Picture Films, Newcastle, ME; Tobey Publishing, 2000.
Photography
Evans, R.M., W.T. Hanson Jr., and W.L. Brewer, Principles of Color Photography, New York,
John Wiley & Sons Inc., 1953.
Mees, C.E.K., The Theory of the Photographic Process, New York, Macmillan, 1977.
Thomas Jr., Woodlief, SPSE Handbook of Photographic Science and Engineering, New York,
John Wiley & Sons, 1973.
Woodbury, Walter E., The Encyclopaedic Dictionary of Photography, New York, The Scovill
and Adams Company, 1898.
Underwater Cinematography
Mertens, Lawrence, In-Water Photography: Theory and Practice, Wiley Interscience, New
York, John Wiley & Sons, 1970.
Ryan, R.T., Underwater Photographic Applications – Introduction, SMPTE Journal, Vol. 82,
No. 12, December 1973.
Industry-Wide Labor-Management Safety Bulletins at: http://www.csatf.org/bulletintro.shtml
Visual Effects
Abbott, ASC, L.B., Special Effects with Wire, Tape and Rubber Bands, Hollywood, CA; ASC
Press, 1984.
Bulleid, H.A.V. (Henry Anthony Vaughan), Special Effects in Cinematography, London,
Fountain Press, 1960.
Clark, Frank P., Special Effects in Motion Pictures Some Methods for Producing Mechanical
Effects, New York, SMPTE, 1966.
Dunn, ASC, Linwood, and Turner, George E., ASC Treasury of Visual Effects, Hollywood, CA;
ASC Press,1983.
Fielding, Raymond, The Technique of Special Effects Cinematography, Boston, MA; Focal
Press, 1985.
Glover, Thomas J., Pocket Ref, Littleton, CO, Sequoia Publishing, 1997.
Harryhausen, Ray, Ray Harryhausen: An Animated Life, New York, NY, Billboard Books,
2004.
Rogers, Pauline B., The Art of Visual Effects: Interviews on the Tools of the Trade, Boston,
MA; Focal Press, 1999.
The Nautical Almanac, commercial edition, Arcata, CA, Paradise Cay Publications (yearly).
Vaz, Matt Cotta and Barron, Craig, The Invisible Art: The Legends of Movie Matte Painting,
San Francisco, CA; Chronicle Books, 2002.
INDEX