Cinematography - Theory and Practice, 4th Edition
blain brown
cinematography
theory and practice
for cinematographers and directors
blain brown
fourth edition
Fourth edition published 2022
by Routledge
2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN
and by Routledge
605 Third Avenue, New York, NY 10158
The right of Blain Brown to be identified as author of this work has been asserted by
him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act
1988.
All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.
Publisher’s Note
This book has been prepared from camera-ready copy provided by the Author
Cover photos:
Upper—Joker, DP Lawrence Sher
Lower—Family History, DP Blain Brown
INTRODUCTION
The technology of filmmaking has changed radically in the 20 years since George Lucas shot the first all-digital HD movie. How DPs, directors, editors, and postproduction artists use the new technology is still evolving, and exciting new developments emerge regularly. The changes create new opportunities and possibilities. A bonus is that cameras have become smaller and lighter; they can be moved in new ways and fit into smaller places—a fact beautifully utilized by cinematographer Roger Deakins on the magnificent film 1917.
At the same time, most of the traditional skills are still critical to success in the camera department. For the DP, a deep understanding of the tools, techniques, and artistry of lighting is still essential. For the camera crew, the protocols for ensuring that everything is good and proper with the equipment are still critical. Focus and optics remain much the same and, of course, elements of visual storytelling such as composition, camera movement, color, and staging are as important to the overall success of a project as they have ever been.
New challenges, new technology, and new tools to learn—these are
things the camera department has loved and embraced since the days of
Thomas Edison.
THE WEBSITE
Please be sure to visit the companion website for Cinematography: Theory and Practice. On the website, you will find instructional videos and examples of techniques discussed in this book, including camera essentials, setting up shots, scene shooting methods, lighting techniques, a day on set, and much more.
www.routledge.com/cw/brown
To view the instructional and demonstration videos, go to the website to
request an Access Token. If you have purchased an e-book version of the
book, please visit the website for further instructions. During the regis-
tration process, you will be prompted to create your own username and
password for access to the site. Please record this for future reference.
CONTENTS
Introduction v
The Website v
01 WRITING WITH MOTION 1
Writing With Motion 2
Building a World 2
The Visual Language of Cinematography 2
Storytelling With Light 4
Defining the Frame 4
The Lens and Space 5
Perspective 6
Light and Color 7
Movement 7
Camera Angle 8
Information 8
Detective POV 10
Visual Texture 11
Visual Metaphor 14
Naturalism vs Stylized 15
Putting It All Together 16
02 THE FRAME 17
More Than Just a Picture 18
Principles of Composition 19
Unity 20
Balance 20
Rhythm 20
Proportion 20
Contrast 20
Texture 20
Directionality 21
The Three-Dimensional Field 21
Creating Depth 21
Relative Size 22
Linear Perspective 22
Atmospheric Perspective 22
Forces of Visual Organization 23
The Line 23
Compositional Triangles 24
Horizontals, Verticals, and Diagonals 24
The Power of the Edge 25
Frame Within a Frame 26
Positive and Negative Space 26
Movement in the Visual Field 27
The Rule of Thirds 28
Rules of Composition for People 28
Headroom 28
Noseroom 29
Other Guidelines 29
Lens Height 29
Dutch Tilt 32
Aspect Ratios 33
03 LANGUAGE OF THE LENS 37
The Lens in Storytelling 38
Foreground/Midground/Background 39
Lens Perspective 39
The Lens and Space 40
Compression of Space 42
Selective Focus 44
Flare/Glare 46
04 CONTINUITY 47
Continuity 48
Shooting For Editing 48
Thinking about Continuity 48
Types of Continuity 49
Continuity of Content 49
Continuity of Movement 50
Continuity of Position 50
Continuity of Time 50
The Prime Directive 50
The Action Axis 51
Screen Direction 51
These Are the Rules—But Why? 51
Screen Direction 52
What Establishes the Line? 53
The Purpose of Screen Direction 53
Directional Conventions 53
Exceptions to the Rule 53
Reverse 55
Turnaround 55
Planning Coverage 58
Cuttability 59
The 20% and 30 Degree Rules 59
Moving Shots 60
Chase Scenes 61
Going Through a Door 61
Entering and Exiting Frame 62
Neutral Axis to Exit Frame 62
Prop Continuity in Coverage 63
Eye Sweeps 63
Group Shots 63
Cutaway Eyeline Continuity 64
Eyelines in Over-the-Shoulder Coverage 64
Eyelines for a Seated Character 64
05 SHOOTING METHODS 65
What Is Cinematic? 66
A Question of Perception 66
The Frame 67
Static Frame 67
Camera Angles 68
The Shots: Building Blocks of a Scene 68
Wide Shot 70
Establishing Shots 70
Character Shots 70
Full Shot 71
Two Shot 71
Medium Shot 71
Close-ups 71
Over-the-Shoulder 73
Cutaways 73
Reaction Shots 73
Inserts 74
Connecting Shots 77
Reveals 77
Pickups 77
Transitional Shots 77
Invisible Technique 77
Involving The Audience: POV 77
The Fourth Wall and POV 79
The Shooting Methods 80
The Master Scene Method 80
The Master Scene Method 81
In-One and Developing Master 82
The Developing Master 83
Freeform Method 85
Overlapping or Triple-Take Method 86
Walk and Talk 86
Montage 86
Establishing the Geography 87
Introductions 88
The Time 90
The Characters 90
Other Editorial Issues In Shooting 90
Jump Cuts 90
Types of Edits 91
The Action Cut 91
The POV Cut 92
The Match Cut 93
The Invisible Cut 94
06 CAMERAS 95
The Digital Signal Path 96
Digital Signal Processor 96
HD Recording 96
UHD 97
RAW vs Baked In 99
RAW Camera Signal Path 99
Viewing Stream 100
Definitions 100
Digital Negative 101
Chroma Subsampling 103
Pixels 105
Resolution 105
Photosites 105
Pixels and Photosites Are Not the Same Thing! 106
Digitizing 106
OLPF 106
Digital Sensors 106
CCD 107
CMOS 108
3-Chip 108
Making Color from Black-and-White 108
Bayer Filter 108
Demosaicing/DeBayering 108
What Color Is Your Sensor? 110
How Many Pixels Is Enough? 111
Shooting Resolution 111
Shutters 112
Spinning Mirror 112
Rolling Shutter and Global Shutter 112
Sensor Size and Depth-of-Field 113
ISO in Digital Cameras 113
Noise 115
IR and Hot Mirror Filters 115
Bit Depth 117
Frame Rates 117
The Film Look vs The Video Look 118
07 MEASURING DIGITAL 119
The Waveform Monitor 120
External Sync 121
Types of Display 122
Color Bars In Detail 124
Using the PLUGE in Monitor Calibration 125
Monitor Probes 127
Legal and Valid 127
The Vectorscope 127
Hue/Phase 128
Using the Vectorscope On the Set 128
Color Bars On the Vectorscope 128
White Balance/Black Balance 129
Gamut 130
Video Test Cards 130
The Deceptively Simple Neutral Gray Card 130
Why Isn’t 18% Gray Also 50%? 132
Calibration Test Charts 133
DSC Labs Test Charts 133
The One Shot 133
The X-Rite ColorChecker 135
ChromaMatch & ScreenAlign 135
Skin Tone 135
Measuring Image Resolution 136
08 EXPOSURE 137
Exposure Theory 138
What Do We Want Exposure to Do for Us? 138
Controlling Exposure 138
Change the Bucket 140
The Elements of Exposure 140
Light 140
F/Stops 141
Shutter Speed/Frame Rate/Shutter Angle 141
The Response Curve 142
Underexposure 142
Overexposure 142
Correct Exposure 142
Higher Brightness Range in the Scene 143
Two Types of Exposure 144
How Film and Video Are Different 144
We’ll Fix It in Post 144
The Bottom Line 144
Exposure in Shooting RAW Video 144
Digital Exposure 146
The Tools of Exposure 146
The Incident Meter 146
The Reflectance Meter 147
A Different World of Exposure 148
Setting Exposure with the Waveform Monitor 148
F/Stops On the Waveform 148
The 18% Solution 149
Exposure Indicators in The Camera 149
Zebras 149
Histogram 150
Traffic Lights and Goal Posts 150
False Color Exposure Display 152
Arri Alexa False Colors 153
Strategies of Exposure 154
Don’t Let It Clip, but Avoid the Noise 154
Texture & Detail 155
The Dilemma 156
Using Light Meters 157
Meter the Key 157
Using the Waveform Monitor 158
Placing Middle Gray 158
Start at the Bottom or Start at the Top 159
Expose to the Right 159
Zebras 160
The Monitor 160
Know Thyself and Know Thy Camera 161
Blackmagic Camera Exposure Advice 161
09 LINEAR, GAMMA, LOG 163
Dynamic Range 164
Linear Response 166
An Ideal and a Problem 166
Linear as Scene Referred 167
The Classic S-Curve In the Image 167
Film Gamma and Video Gamma 169
Video Gamma 169
The Coincidence 170
Rec 709 and Rec 2020 170
The Rec 709 Transfer Function 171
Rec 2020 172
Studio Swing Levels, Full Range and Legal Video 172
The Code 100 Problem 173
Hypergamma/Cinegamma/Film Rec 173
Sony Hypergamma Terminology 174
Gamma in RAW Video 174
The Inefficiency of Linear 174
Log Encoding 175
Brief History of Log 176
Superwhite 176
What You See Is Not What You Get 177
Log and RAW—Two Different Things 178
Proprietary Log Curves 179
Sony S-Log 179
Arri Log C 182
Canon-Log 184
Panalog 184
RedCode 184
Red Log 184
18% Gray In Log 185
Variation in Log Curves 186
10 COLOR 189
Color Terminology 190
Color Temperature: The Balances 190
Warm and Cool 192
White Balance, Black Balance, and Black Shading 193
Magenta vs Green 194
The CIE Diagram 195
Gamut 196
Video Color Spaces 196
Rec 709 and Rec 2020 197
DCI P3 197
AMPAS ACES Color Space 197
The Matrix 198
Color Balance with Gels and Filters 200
Conversion Gels 200
Light Balancing Gels 201
Color Correction Gels 202
Correcting Off-Color Lights 204
HMI 204
Industrial Lamps 204
Color as a Storytelling Tool 204
Film Color Palettes 205
11 IMAGE CONTROL 219
Getting the Look You Want 220
At the DIT Cart 220
What Happens At the Cart 220
Color Correction and Color Grading 222
Lift/Shadows 224
Gamma/Midtones 225
Gain/Highlights 225
Curves 225
Log Controls 226
Log Offset Color and Master Controls 227
Exporting and Reusing Grades 229
LUTs And Looks 230
1D LUTS 230
3D LUTS 230
LUTs and Looks: What’s the Difference? 232
Controlling the Image in Front of the Lens 232
Diffusion and Effects Filters 233
Camera Lens Filters for Color Correction 238
Warming and Cooling Filters 238
Contrast Control in Black-And-White 239
Polarizers 241
IR Filters 241
12 LIGHTING SOURCES 243
The Tools of Lighting 244
Color Balance 244
Color Rendering Index 244
Daylight/Tungsten Sources 245
LED Lights 245
Remote Phosphor LEDs 245
HMI Units 246
Xenons 251
Tungsten Lights 251
Fresnels 252
Open Face 253
PARs 253
Soft Lights 255
Barger Baglights 255
Color-Correct Fluorescents 255
Other Types of Units 256
SoftSun 256
Cycs, Strips, Nooks, and Broads 256
Chinese Lanterns and Spacelights 257
Self-Contained Crane Rigs 258
Ellipsoidal Reflector Spots 258
Balloon Lights 258
Handheld Units 259
Equipment for Day Exteriors 259
Improvising Lights 260
Box Lights 260
Christmas Tree Lights 260
Projector Bulbs 261
For More Information On Lighting 262
13 LIGHTING 263
The Fundamentals of Lighting 264
The [conceptual] Tools of Lighting 264
The Attributes of Light 264
Hard vs Soft 264
What are the Goals of Good Lighting? 265
Full Range of Tones 265
Color Control and Color Balance 266
Shape 266
Separation 266
Depth 267
Texture 267
Mood and Tone 267
Shadows 268
Reveal and Conceal 268
Exposure and Lighting 268
Some Lighting Terminology 269
Working with Hard Light and Soft Light 270
Hard Light 271
Soft Light 272
Direction 274
Avoiding Flat Front Lighting 274
Light from the Upstage Side 274
Backlight and Kicker 274
Intensity 275
Texture in Lighting 275
Color 275
Lighting Techniques 276
Ambient 276
Classical Lighting 277
Bringing it through the windows 277
Practicals and Motivated Lighting 277
Basic Principles of Lighting 278
Light From the Upstage Side—Reverse Key 278
Back Cross Keys 278
Ambient Plus Accents 279
Lighting with Practicals 280
Lighting through the Window 282
Available Natural Light 282
Available Light Windows 284
Motivated Light 286
Carrying a Lamp 286
Day Exteriors 287
Fill 287
Silks and Diffusion 288
Open Shade and Garage Door Light 288
Sun As Backlight 289
Magic Hour 289
The Same Scene Lit Seven Different Ways 290
14 CONTROLLING LIGHT 299
Hard Light and Soft Light 300
Scrims and Barndoors 301
Flags, Solids, and Nets 301
Chimeras and Snoots 302
Softboxes 302
Eggcrates 302
The Window Problem 303
Cookies, Celos, and Gobos 304
Dimmers 305
LED Dimmers 309
Hand Squeezers 310
15 GRIPOLOGY 319
Definitions 320
Reflectors 321
Flags and Cutters 323
Nets 324
Cuculoris (Cookies) 325
Grids and Eggcrates 326
Open Frames 326
Diffusers 327
Butterflies and Overheads 327
Holding 328
Grip Heads and C-Stands 330
Highboys 331
Clamps 331
Wall Plates, Baby Plates, and Pigeons 335
2K Receivers and Turtles 336
Side Arms and Offset Arms 339
Other Grip Gear 339
Sandbags 340
Apple boxes 340
Wedges 340
Candle Sticks 340
Studded Chain Vise Grips 342
16 CAMERA MOVEMENT 343
Camera Movement in Filmmaking 344
Camera Operating 345
Motivated vs Unmotivated Movement 345
Basic Technique 346
Types Of Moves 347
Pan 347
Tilt 347
Move In / Move Out 347
Zoom 348
Punch-in 349
Moving Shots 349
Tracking 349
Countermove 349
Reveal with Movement 349
Circle Track Moves 350
Crane Moves 350
Rolling Shot 350
Camera Supports for Movement 351
Drones 351
Handheld 351
Stabilizer Rigs 352
Camera Heads 353
The Tripod 355
High-Hat 356
Rocker Plate 356
Tilt Plate 356
The Crab Dolly 356
Dolly Terminology 356
Car Shots 360
Aerial Shots 362
Other Types of Camera Mounts 362
Steadicam 362
Cable-Cam 363
Splash Boxes 363
Underwater Housings 363
Motion Control 364
17 OPTICS & FOCUS 365
The Physical Basis Of Optics 366
Refraction 366
Focal Length and Angle of View 366
F/Stop 367
Focus 367
Mental Focus 368
Circle of Confusion 370
Depth-of-Field 371
How Not to Get More Depth-of-Field 371
Hyperfocal Distance 372
Nodal Points 374
The Rear Nodal Point and Special Effects Shots 376
Zooms and Depth-of-Field 376
Macrophotography 377
Exposure Compensation in Macrophotography 377
Depth-of-Field in Close-Up Work 377
Calculating Depth-of-Field in Close-Up Work 377
Close-Up Tools 377
Diopters 378
Extension Tubes or Bellows 378
Macro Lenses 379
Snorkels and Innovision 379
Specialized Lenses 379
Lens Extenders and Filter Factors 379
Lens Care 380
Back Focus 380
18 SET OPERATIONS 381
Making It Happen 382
The Director of Photography 382
The Cinematographer’s Tools 384
Gaffer Glass 385
Laser Pointer 385
Director’s Viewfinder 385
Digital Still Camera 385
The Shot List 385
Putting the Order Together 386
Reading the Script 386
Talking to the Director 386
Location Scouts and Tech Scouts 387
Coordinating with Other Departments 387
The Team and The Order 389
The Page Turn 390
Tests 390
Camera Crew 390
Operator 391
First AC Duties 392
Second AC 393
Loader 394
DIT 395
Camera Crew Reports, Equipment & Tools 395
Camera Reports 395
Camera Assistant Tools and Supplies 396
AC Prep 398
Camera Prep Checklist 399
The Team 400
Lighting Technicians (Electricians or Sparks) 400
Grips 400
Other Units 401
Set Procedures 403
Block, Light, Rehearse, Shoot 403
The Process 404
Room Tone 406
Set Etiquette 406
Set Safety 408
Lighting, Electrical, and Grip 409
Crane Safety 409
Slating Technique 410
Verbal Slating 410
Tail Slate 411
MOS Slating 411
Slating Multiple Cameras 411
Timecode Slates 412
Jamming the Slate 412
What to Write on the Slate 415
When to Change the Letter 415
The European System of Slating 416
Pickups, Series, and Reshoots 416
VFX 417
Insert Slates 418
Finding the Sun 418
19 DIT & WORKFLOW 419
Data Management 420
Basic Principles 420
Cover your Rear 420
Standard Procedures 421
Maintain Your Logs 421
Procedure—Best Practices 421
Locked and Loaded 422
Get Your Signals Straight 422
Always Scrub 423
Three Drives 423
Do Not Drag and Drop 424
Logs 424
File Management 425
File Naming 425
Download/Ingest Software 425
ShotPut Pro 425
Silverstack 425
Proprietary Data Management Software 426
External Recorders 426
Ten Numbers: ASC-CDL 426
Goodbye, Knobs 427
Primary Correction Only 428
SOP and S 428
ACES: What It Is, What It Does 430
AMPAS and ACES 430
The Stages 431
ACES Terminology 432
20 POWER & DISTRO 433
Measurement of Electricity 434
Potential 434
Paper Amps 435
Electrical Supply Systems 435
Single-phase 435
Three-phase 435
Power Sources 436
Stage Service 436
Generators 436
Large Generator Operation 437
Guidelines for Running Small Generators 437
Paralleling Small Generators 439
Tie-ins 439
Tie-in Safety 439
Determining KVA 441
Wall Plugging 442
Load Calculations and Paper Amps 442
Ampacity 443
Color Coding 444
The Neutral 444
Distribution Equipment 444
Tie-in Clamps 444
Busbar Lugs 444
Connectors 444
Bull Switches 445
Feeder Cable 445
Distribution Boxes 445
Lunch Boxes, Snake Bites, and Gangboxes 445
Extensions (Stingers) 446
Zip Extensions 446
Planning a Distribution System 446
Balancing the Load 446
Calculating Voltage Drop 449
Electrical Safety 450
Wet Work 451
HMI Safety 451
Grounding Safety 451
GFCI 453
21 TECHNICAL ISSUES 455
Shooting Greenscreen/Bluescreen 456
Lighting for Greenscreen/Bluescreen 456
Dimmers 458
Dimming LEDs 460
Dimmer Boards 461
Working with Strobes 461
High-Speed Photography 463
Lighting For Extreme Close-Up 464
Effects 464
Smoke 464
Fire 464
TV and Projector Effects 465
Day-for-Night 466
Moonlight Efect 467
Water EFX 467
Rain 467
Lightning 468
Gunshots and Explosions 469
Time-Lapse Photography 469
Time Slicing 470
Transferring Film To Video 471
Flicker 471
Shooting Virtual Reality 475
Dealing With Audio 477
Double System vs Single System Sound 477
Lock It Up! 477
Microphones 478
Mic on the Camera? No! 478
Directional Mics 478
Shotgun Mics 478
Mic Mounts 479
Wireless Lavs 479
Phantom Power 479
Mic vs Line Input 479
Audio Basics 479
Rule #1 479
Rule #2 479
Scratch Track 480
Wild Track 480
Foley 480
Room Tone 480
ADR Is Expensive 480
Shooting To Playback 480
Timecode Slate 481
BACKMATTER 483
Dedication 483
About the Author 483
Acknowledgments 483
About the Website 483
BIBLIOGRAPHY 484
INDEX 486
01
writing with motion
WRITING WITH MOTION
Filmmaking is about telling stories visually. The cinematography of a film is central to this. The word cinematography is from the Greek roots kìnema, “movement,” and gràphein, “to write.” Cinematography is more than just photography—more than just recording what is in front of the camera; it is the process of taking ideas, words, actions, emotional subtext, tone, and all other forms of nonverbal communication and rendering them in visual terms. Cinematic technique is the entire range of methods and crafts that we use to add layers of meaning and subtext to the “content” of the film—the actors, sets, dialog, and action. Figure 1.1 illustrates this—it could have been just a shot of a guy holding a skull, but a skillful combination of lighting, focus, depth-of-field, and composition makes it so much more than that.
Figure 1.1. Cinematography is composed of many different techniques. In this frame from Blade Runner 2049, lighting from the upstage side, shallow, selective focus, and warm-tone light create a powerful frame that is more than just the sum of its parts.
BUILDING A WORLD
A film creates a visual world for the characters to inhabit. This world is an important part of how the audience will perceive the story; how they will understand the characters and their motivations.
Think of great films like Blade Runner, Casablanca, Fight Club, O Brother, Where Art Thou?, or The Grand Budapest Hotel. They all have a definite, identifiable universe in which they exist: it consists of the locations, the sets, the wardrobe, even the sounds, but to a large extent these visual worlds are created through the cinematography. All these elements work together, of course—everything in visual storytelling is interrelated: the sets might be fantastic, but if the lighting is terrible, then the end result will be substandard.
“Cinematography is writing with images in movement.” Robert Bresson (The Trial of Joan of Arc)
Let’s look at this shot from early in High Noon (Figure 1.4). Gary Cooper, the sheriff of the town, has been abandoned by the frightened citizens as the outlaw kingpin is set to return on the noon train. He is alone. The shot starts tight on his face, then the camera pulls back and up to show just how alone he is. It is a powerful, graphic representation of his vulnerable situation.
THE VISUAL LANGUAGE OF CINEMATOGRAPHY
What we’re talking about here is not the physical tools of filmmaking: the camera, dolly, the lights, cranes, and camera mounts; we will get to those later. We are talking about the conceptual tools of the trade. So what are the conceptual tools of visual storytelling that we employ? There are many, but we can roughly classify them into some general categories:
2..cinematography:.theory.and.practice.
• The frame.
• Light and color.
• The lens.
• Focus.
• Perspective.
• Movement.
• Texture.
• Information.
• POV.
• Visual Metaphor.
Figure 1.2. (top) Sometimes the simplest inflection can change the meaning of the shot entirely. In this scene from Dunkirk, it’s just a shot of men waiting on the mole.
Figure 1.3. (above) When the main character turns his head to look up, it becomes about the struggle of a single individual to survive, and about the terror of being unable to run when under air attack.
Figure 1.4. This shot from High Noon uses a big but simple camera move to tell the story—the sheriff is alone; no one is going to help him fight off the outlaws who are coming to the town. It starts tight on his face, then the camera pulls back and cranes up to show how isolated he is. He turns and walks up the street to his fate. The camera move is eloquent and powerful—it tells the entire story.
PERSPECTIVE
Another aspect of space in filmmaking is perspective. It can be important in establishing a sense of depth in the frame, as in this frame from The Shining (Figure 1.12). Kubrick uses strong perspective throughout the film to intensify the sense of menace. Focusing the viewer on a particular point makes us wonder if something will happen.
LIGHT AND COLOR
Light and color are some of the most powerful and flexible tools in the cinematographer’s arsenal. Lighting and controlling color are what takes up most of the director of photography’s time on most sets, and for good reason. They also have a special power that is shared only by a very few art forms such as music and dance: they have the ability to reach people at a gut, emotional level. Figure 1.9 is from Black Rain—the garish night lighting of Osaka reflected in the wet pavement underscores how isolated the characters are in an environment that is foreign to them. Figure 1.11 is an especially powerful use of lighting: chiaroscuro lighting and the deep shadows reveal the madness of Col. Kurtz.
Figure 1.7. (top) An extreme wide-angle lens distorts the image and conveys the idea of an insane world in The City of Lost Children.
Figure 1.8. (above) An extreme long focal length lens enhances the desert mirage effect and the emptiness of the space in this famous shot from Lawrence of Arabia. The lens was made specially for the film by Panavision.
MOVEMENT
Movement is a powerful tool of filmmaking; in fact, movies are one of the few art forms that employ motion and time, dance obviously being another one. The opening sequence of the Orson Welles masterpiece Touch of Evil (Figure 1.15) is an excellent example of motion that serves an important storytelling purpose. The 3-1/2 minute shot is often cited as a bravura camera movement, but this misses the point. In a single extended shot, Welles shows us a bomb being planted, introduces the main characters, sets up their situation, where they are, and what they are doing. At the end, the bomb explodes, establishing the sense of danger and jeopardy in their situation.
Figure 1.9. (top) The garish night lights of Osaka set the mood for this scene from Black Rain.
Figure 1.10. (above) A high camera angle shows the emptiness of the landscape at the infamous crossroads in O Brother, Where Art Thou?
CAMERA ANGLE
Camera angle is where the camera is placed in relation to the scene: it can be high angle (above), low angle (below, looking up), eye-level (the most frequently used), and a wide variety of others. It is a key ingredient in composition, but it also very much affects our emotional reaction to a shot; when you want a character to seem powerful, vulnerable, or intimate, camera angles are a vital tool. This shot from O Brother, Where Art Thou? (Figure 1.10) uses a high angle to show us the famous crossroads where Robert Johnson sold his soul in exchange for being a guitar genius. The emptiness of the landscape sets the mood for the scene.
“Visual storytelling in film is the art of conveying a narrative journey with the images that are possible because of the amazing technology of this art form.” Ken Aguado
INFORMATION
The camera can reveal or conceal information; think of it as a visual equivalent of exposition, which in verbal storytelling means conveying important information or background to the audience. It is really at the heart of telling a story visually—letting the camera show us information is usually a more cinematic way of getting information across to the audience than is dialog or a voice-over narrator. In this frame from Suspicion, Cary Grant is seemingly going to poison his wife. As he climbs the stairs, Hitchcock wants to draw attention to the possibly poisoned glass of milk—he had the crew place a small bulb inside the glass to make it glow. This telling detail greatly enhances the suspense of the scene. Establishing information can be done with a choice of the frame, or a camera move and the lens, but it can also be done with lighting that conceals or reveals certain details of the scene.
Figure 1.11. (top) The chiaroscuro (light and dark) and heavy shadows of this shot from Apocalypse Now powerfully underscore the madness of Col. Kurtz.
Figure 1.12. (above) Kubrick uses one-point perspective throughout The Shining to increase the sense of supernatural menace.
POINT-OF-VIEW
Point-of-view (POV) is a key tool of visual storytelling. We use the term in many different ways on a film set, but the most often used meaning is to have the camera see something in much the same way as one of the characters would see it: to view the scene from that character’s point-of-view. The camera is the “eye” of the audience. To a great extent, cinematography consists of showing the audience what we want them to know about the story; POV shots tend to make the audience more involved in the story for the simple reason that what they see and what the character sees are momentarily the same thing—in a sense, the audience inhabits the character’s brain and experiences the world as that character is experiencing it.
Figure 1.13. (top) Polanski uses over-the-shoulder shots to employ detective POV in Chinatown.
Figure 1.14. (above) A simple detail makes this shot from In Cold Blood a masterpiece of visual storytelling. About to be executed, he tells a story about his life—the spray dripping on the window makes it look like the rain is crying for him.
DETECTIVE POV
Chinatown employs another layer of POV as well—called detective POV.
It simply means that the audience does not know something until the
detective knows it—we only discover clues when he discovers them. This
means that the viewer is even more involved in how the main character
is experiencing the events of the story. Polanski is a master of this story
technique, and he makes it truly visual. For example, in Chinatown, any
time Jake Gittes is coming to a new location looking for clues, the opening
shots are over-the-shoulders as in Figure 1.13.
VISUAL TEXTURE
These days, we rarely shoot anything “straight”—meaning a scene where we merely record reality and attempt to reproduce it exactly as it appears in life. In most cases—particularly in feature films, commercials, and certainly in music videos—we manipulate the image in some way; we add some visual texture to it (this is not to be confused with the surface texture of objects). There are many devices we use to accomplish this: changing the color and contrast of the picture, desaturating the color of the image, filters, fog and smoke effects, rain, using unusual film stocks, various printing techniques, and of course, the whole range of image manipulation that can be accomplished with digital images on the computer—the list goes on and on. Some of these image manipulations are done with the camera, some are done with lighting (Figure 1.18), some are mechanical efx, and some are done in postproduction. Even though today’s films tend to be shot in a more naturalistic style, the actual images are nearly always manipulated in some way. As a general rule, the exception to this has always been comedy. Even in terms of framing, comedy tends to be filmed in the wide shot, with relatively few close-ups.
A particularly dramatic example is the film City of Lost Children (Figure 1.19). Although it looks like a street by the wharf, it is a studio set—the buildings in the background are just wood cutouts. The filmmakers use a heavy fog effect to sell the illusion and create atmospheric perspective, an effect first observed by Leonardo da Vinci: the further away objects in the landscape are, the less distinct they appear. Although not shown here, in exterior landscapes distant objects also tend to be bluer.
Figure 1.15. (top) The 3-1/2 minute opening shot of Orson Welles’ Touch of Evil is famous as a bravura camera move, but what is important about it is that it sets up the entire story, the characters, the situation, and even the threat of violence. It is far more than just a “cool camera move.”
Figure 1.16. (above) To emphasize the possibly poisoned glass of milk, Hitchcock had the crew place a small light bulb inside, which increases the sense of menace as he climbs the stairs in Suspicion.
Figure 1.17. The story in Memento proceeds in two streams. Going forward in time, it is told in black-and-white; going backwards, it is in color. At the moment the two time lines converge, Leonard takes a Polaroid. As it develops, it turns from black-and-white to color. We first see him in monochrome, but when we cut back to him, he is in color—an elegant transition device and a perfect visual metaphor.
Figure 1.18. (top) Texture in the lighting adds an extra dimension to this shot from Gladiator.
Figure 1.19. (above) Atmospheric perspective and texture in City of Lost Children.
VISUAL METAPHOR

One of our most important tools as filmmakers is visual metaphor, which is the ability of images to convey a meaning in addition to their straightforward reality. An example: in Memento, the extended flashback (which moves forward in time) is shown in black-and-white, and the present (which moves backward in time) is told in color. Essentially, it is two parts of the same story, with one part moving forward and the other part told backward. At the point in time where they intersect, the black-and-white slowly changes to color. Director Christopher Nolan accomplishes this in a subtle and elegant way by showing a Polaroid photo develop (Figure 1.17). At the precise moment when these two time lines intersect, we watch as the Polaroid turns from no color to full color—a simple, elegant, and expressive visual metaphor. Figures 1.20 and 1.21 are an eloquent example of visual metaphor in a different Nolan film—Dunkirk.

Figure 1.20. (top) In a climactic scene from Christopher Nolan's Dunkirk, Farrier has set his Spitfire burning to prevent capture. As he watches it burn, it is a visual metaphor for the defeat Britain has just suffered.

Figure 1.21. (above) His seemingly blank stare tells us volumes about the resolution of the Brits to never surrender.
NATURALISM VS STYLIZED

A film's visual style may lie anywhere on a spectrum from highly stylized to very naturalistic. The Cabinet of Dr. Caligari has a very stylized look (Figure 1.22). Films from the Technicolor era, such as The Wizard of Oz or Singin' in the Rain, are somewhere in the middle; the exaggerated color and over-lit "studio look" are not natural. Films since the 1960s have tended more and more toward naturalism. This is aided by modern cameras that can shoot in low light conditions and new ways of moving the camera that make it possible to shoot a film to look almost as if you are actually there, observing the action in person.

Figure 1.22. The Cabinet of Dr. Caligari is one of the most highly stylized films ever made. Modern films have become progressively more naturalistic, partly because new cameras and lighting equipment have made it easier to shoot realistically.
Director of photography M. David Mullen writes: "With modern tools, it is more possible than ever to shoot movies in available light. This approach, used appropriately, can enhance the drama of a scene or an entire movie. But it sometimes can be used as a crutch by some filmmakers to avoid actually doing the hard work of making the movie: taking the time to think about the appropriate use of light and shadow to tell this particular story, and then executing that creative idea. This desire by some to avoid thinking about controlling or creating light even extends into other issues like composition; they fall into the trap of seeing the camera merely as a passive recording tool that follows whatever action occurs in front of it."

Most of the films I have shot have been based in reality, so it follows that much of what I do is founded in a naturalistic approach.
Roger Deakins
(1917, The Big Lebowski, Skyfall, Barton Fink)
PUTTING IT ALL TOGETHER

Filmmaking is a strange and mysterious enterprise—it involves mixing and coordinating many different elements, some of them artistic, some of them technical. In particular, the cinematographer must be able to bridge that gap—to understand the practical side of dealing with the camera, lenses, lighting, file types, workflow, and so on, but also have their minds firmly planted in the artistic side of creating a visual world, visual metaphor, and storytelling. There is a third aspect as well: being an amateur psychologist. On a film set, there is no more fundamental collaboration than that of the cinematographer and the director.

Figure 1.23. Filmmaking is about making dreams come alive, even if they are nightmares.
Many directors are adept at conveying their vision of the project either verbally or with drawings, metaphors, or photographic references. Some directors are not as good at this—they have a visual idea, but they are not able to communicate it well to their collaborators. In other cases, the director does not have a particular visual concept and wants help in developing one. In these instances, it is really up to the cinematographer to reach into the director's head and try to understand what it is she or he is trying to accomplish; if there are missing pieces in the visual puzzle that is a film project, then it is up to the DP to fill in those blank spots with artistic inspiration, collaboration, and leadership. Sometimes this calls into play another role the cinematographer must fill—diplomat, which may call for a great deal of delicacy and care about how one phrases a suggestion. In any case, it is up to the cinematographer to make the film's vision come alive. We are in the business of making things happen—taking artistic ideas and implementing them in the real world of the film set. Our job is to make dreams come alive; it is a challenging and satisfying undertaking.

The photography is a very large contribution. It just can't seem like a large contribution.
Gordon Willis
(The Godfather, Annie Hall)
02
the frame
MORE THAN JUST A PICTURE

Let's think of the frame as more than just a picture—it is information. Clearly some parts of the information are more important than others, and we want it organized in a particular way to be perceived by the viewer in a certain order. Despite how it seems, we do not perceive an image all at once, which is why the order of perception is important.

Figure 2.1. This shot from the finale of the film noir classic The Big Combo is not only graphically strong in composition, but the many visual elements all work together to reinforce and add subtext to the story content of the scene.
Composition (and lighting, which can be part of composition) is how this is accomplished. Through composition we are telling the audience where to look, what to look at, and in what order to look at it. The frame is fundamentally two-dimensional design (3-D films notwithstanding). Two-dimensional design is about guiding the eye and directing the attention of the viewer in an organized manner that conveys the meaning that you wish to impart. It is how we impose a point-of-view on the material that may be different from how others see it.

If all we did was simply photograph what is there in exactly the same way everyone else sees it, the job could be done by a robot camera; there would be no need for the cinematographer or editor. An image should convey meaning, mode, tone, atmosphere, and subtext on its own—without regard to voice-over, dialog, audio, or other explanation. This was in its purest essence in silent film, but the principle still applies: the images must stand on their own.

Cinema is a matter of what's in the frame and what's out.
Martin Scorsese
(Goodfellas, The Aviator, Casino, Raging Bull)
Good composition reinforces the way in which the mind organizes information. In some cases, it may deliberately run counter to how the eye/brain combination works in order to add a new layer of meaning or ironic comment. Composition selects and emphasizes elements such as size, shape, order, dominance, hierarchy, pattern, resonance, and discordance in ways that give meaning to the things being photographed that go beyond the simple "here they are." We will start with the very basic rules of visual organization, then move on to more sophisticated concepts of design and visual language. The principles of design and visual communication are a vast subject; here we will just touch on the basics in order to lay the foundation for discussion.

Figure 2.2. (top) The rhythm of repeated elements is an important component of this shot from The Conformist.

Figure 2.3. (above) The symmetrical balance created by the shadows and light on the floor is in visual tension with the off-center figure in this frame from The Man Who Wasn't There.

PRINCIPLES OF COMPOSITION

Certain basic principles pertain to all types of visual design, whether in film, photography, painting, or drawing. These techniques of basic design work interactively in various combinations to add depth, movement, and visual force to the elements of the frame. We can think of them as guidelines for visual organization.
• Unity
• Balance
• Perspective
• Rhythm
• Proportion
• Contrast
• Texture
• Directionality
UNITY

Unity is the principle that the visual organization of an image is self-contained and complete. This is true even if it is a deliberately chaotic or unorganized composition. In Figure 2.1, the climactic final shot from The Big Combo uses frame-within-a-frame composition to tell the story visually: having defeated the bad guys, the hero and femme fatale emerge from the darkness into the light of morning.

Figure 2.4. Use of background/foreground/midground, choice of lens, and camera position combine to give this Roger Deakins shot depth and three-dimensionality in Blade Runner 2049.
BALANCE
Visual balance (or lack of balance) is an important part of composition.
Every element in a visual composition has a visual weight. These may be
organized into a balanced or unbalanced composition. The visual weight
of an object is primarily determined by its size but is also affected by its
position in the frame, its color, movement, and the subject matter itself.
RHYTHM
The rhythm of repetitive or similar elements can create patterns of organization. Rhythm plays a key role in the visual field, sometimes in a very subtle
way as in Figures 2.2 and 2.3, frames from The Conformist and The Man
Who Wasn’t There.
PROPORTION
Classical Greek philosophy expressed the idea that mathematics was the
controlling force of the universe and that it was expressed in visual forces
as the Golden Mean. The Golden Mean is just one way of looking at pro-
portion and size relationships in general. Figure 2.5 shows the Golden
Mean as applied to The Great Wave off Kanagawa by Japanese ukiyo-e artist Hokusai. The outer rectangle defined by the Golden Mean is 1.62:1, close to 1.78:1, a widely used HD standard frame proportion.
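The comparison between the Golden Mean and the HD frame is easy to verify numerically. A minimal sketch in Python (the formula for the golden ratio is a standard mathematical fact, not from the text):

```python
import math

# The Golden Mean: phi = (1 + sqrt(5)) / 2
phi = (1 + math.sqrt(5)) / 2
print(round(phi, 3))     # 1.618 -- the Golden Mean rectangle
print(round(16 / 9, 3))  # 1.778 -- the 16x9 HD frame
```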
CONTRAST
We know a thing by its opposite. Contrast is a function of the light/dark
value, the color and texture of the objects in the frame, and the lighting.
It is an important component in defining depth and spatial relationships, and, of course, it carries considerable emotional and storytelling weight as well.
TEXTURE
Based on our associations with physical objects and cultural factors, texture
gives perceptual clues. Texture can be a function of the objects themselves,
but usually requires lighting to bring it out, as in Figure 2.13. We also
add texture in many different ways in filmmaking; see the chapter Lighting
where we will discuss adding visual texture to lighting as a way of shaping
the light.
DIRECTIONALITY

One of the most fundamental of visual principles is directionality. With a few exceptions, everything has some element of directionality. This directionality is a key element of its visual weight, which determines how it will act in a visual field and how it will affect other elements. Anything that is not symmetrical is directional.

Figure 2.5. The Golden Mean as shown in The Great Wave off Kanagawa by Hokusai.
THE THREE-DIMENSIONAL FIELD

In any form of photography, we are taking a three-dimensional world and projecting it onto a two-dimensional frame (although this is less true of 3-D filmmaking). Part of our work in shooting visual stories is this essential idea of creating a three-dimensional world out of two-dimensional images. It calls into play a vast array of techniques and methods: the lens, blocking of actors, lighting, and camera movement. There are, of course, times when we wish to make the frame more two-dimensional, even replicating the flat space of an animated cartoon, for example; in that case, the same visual design principles apply, just used in a different fashion to create that visual effect. Many visual forces contribute to the illusion of depth and dimension. For the most part, they relate to how the human eye/brain combination perceives space, but some of them are cultural and historical as well—as film viewers, we all have a long history of visual education from everything we have seen before.

If a shot lasts five seconds, the audience must see clearly in that five seconds what the picture is about. You have to see the actors right away, but at the same time you have to light the mood. Composition is really important in creating the mood; it also helps to sort out what you want the eye to see and in what order.
Vilmos Zsigmond
(The Deer Hunter, Close Encounters of the Third Kind)
CREATING DEPTH

Figure 2.6. We see the riders in The Magnificent Seven. A rather ordinary composition.

Figure 2.7. The same story is told in Seven Samurai, but Kurosawa achieves a far more dynamic and meaningful arrangement of the actors.

In working toward establishing this sense of depth and three-dimensionality, there are a number of ways to create the illusion. Figure 2.8, a deep focus shot from Touch of Evil, shows a great sense of depth in a visual field, as do Figures 2.6 and 2.7, from the two versions of the Seven Samurai story. In terms of the editing, it is useful to view a scene from more than one angle—shooting a scene entirely from a single angle creates what we call flat space. Elements that create a sense of visual depth include:
• Overlap
• Size change
• Linear perspective
• Foreshortening
• Chiaroscuro
• Atmospheric perspective
RELATIVE SIZE
Although the eye can be fooled, the relative size of an object is an impor-
tant visual clue to depth. Relative size is a component of many optical
illusions and a key compositional element in manipulating the viewer’s
perception of the subject; it can be used to focus the viewer’s attention on
important elements. There are many ways to manipulate relative size in
the frame, using position or different lenses.
LINEAR PERSPECTIVE

Linear perspective was an invention of the Renaissance artist Brunelleschi. In film and video photography, it is not necessary to know the rules of perspective, but it is important to recognize their role in visual organization. Strong diagonal lines of linear perspective are crucial to the shot from Dr. Zhivago in Figure 2.19, and director Stanley Kubrick uses strong geometry in Dr. Strangelove (Figure 2.20) for similar storytelling purposes.
ATMOSPHERIC PERSPECTIVE

Atmospheric perspective (sometimes called aerial perspective) is something of a special case, as it is an entirely "real world" phenomenon. The term was coined by Leonardo da Vinci, who used it in his paintings. Objects that are a great distance away will have less detail, less saturated colors, and generally be less defined than those that are closer. This is a result of the image being filtered through more atmosphere; the haze filters out some of the warmer wavelengths, leaving more of the shorter, bluer wavelengths. It can be re-created on set with haze effects or scrims.

Figure 2.8. Strong perspective, deep focus, and a dynamic space are achieved with a wide angle lens and camera placement in Orson Welles' Touch of Evil.
is shown in this frame from Drive (Figure 2.16). This power of the frame itself is also important in our choice of aspect ratio—which is the shape of the frame. We'll look at aspect ratios in a moment.

Figure 2.14. (top) Line as form and movement in this frame from Black Panther.

Figure 2.15. (above) Negative space makes for a compelling composition in La La Land.

FRAME WITHIN A FRAME
Sometimes the composition demands a frame that is different from the aspect ratio of the film. A solution is to use a frame within a frame—which means using framing elements within the shot. Figure 2.22 is an example from Jeunet's Delicatessen, where a staircase frames the people at the bottom of the stairs. It is particularly useful with very wide screen formats. Frame within a frame can be used not only to alter the aspect ratio of the shot but also to focus attention on important story elements.

The thing about movies is that you are telling the audience where to look. When you cut to something you're saying—look at this, this is important.
Brian DePalma
(Carrie, Body Double, Bonfire of the Vanities)

POSITIVE AND NEGATIVE SPACE
The visual weight of objects or lines of force can create positive space—a visual force—but their absence can create negative space, as in the frames from Psycho (Figure 2.21) and La La Land (Figure 2.15). Negative space can make for strong, compelling visual compositions. Remember that the space off-screen can be important also, especially if the character looks off-screen, or even past the camera. Be aware of the force of the spaces outside the frame; they can sometimes have as much visual weight as objects inside the frame.
MOVEMENT IN THE VISUAL FIELD

All of these forces work in combination, of course—in ways that interact to create a sense of movement in the visual field. These factors combine to create a visual movement (eye scan) from front to back in a circular fashion, as we see in Seven Samurai (Figure 2.24). This movement in the frame is important not only for the composition but also plays a role in the order in which the viewer perceives and assimilates the subjects in the frame. This influences their perception of content. In analyzing frames in this way, remember that we are talking about the movement of the eye, not movement of the camera or movement of the actor or object within a shot.

Figure 2.16. (top) The power of the edge in composition is shown in this frame from Drive.

Figure 2.17. (above) Compositional triangles in Citizen Kane.
THE RULE OF THIRDS

The rule of thirds starts by dividing the frame into thirds horizontally and vertically (Figure 2.18). It proposes that a useful approximate starting point for any compositional grouping is to place major points of interest in the scene on any of the four intersections of the interior lines. It is a simple but effective rough principle for any frame composition. The rule of thirds has been used by artists for centuries; however, as Dr. Venkman says in Ghostbusters, "It's really more of a guideline than a rule."

Figure 2.18. (top) The Rule of Thirds illustrated in this frame from Chinatown.

Figure 2.19. (above) Strong diagonal lines of linear perspective are crucial to this shot from Dr. Zhivago.
RULES OF COMPOSITION FOR PEOPLE
If ever there were rules made to be broken, they are the rules of compo-
sition, but it is important to understand them before deviating or using
them in a contrary style.
HEADROOM
Headroom is a key issue in framing people. It is a natural tendency to put a
The edges of the frame are often person’s head in the center of the frame. This results in lots of wasted space
more interesting than the center above the person that serves no purpose and results in poor composition
Luciano Tovoli, (Figure 2.27). Standard practice is to keep the top of the head fairly close
(Suspiria, Bread and Chocolate) to the top of the frame (Figure 2.28) without cutting them of. However,
giving them a haircut is fne for close-ups.
NOSEROOM

A similar issue is noseroom, sometimes called looking room. The natural instinct is to put the actor in the center of the frame—something you usually have to avoid. Think of the actor's eyeline or gaze as having some visual weight—it needs some room. When the performer is looking off to the left or right, move them over to the other side of the frame to make sure there is appropriate noseroom for the composition (Figures 2.25 and 2.26). If the performer's gaze is straight ahead, then centering them in the frame works fine. Sometimes, however, we may want to use "wrong side framing" and have the figure "push the frame," for storytelling effect.

Figure 2.20. Kubrick is a master of choosing the right angle and lens to tell the story powerfully. In this shot, the lens height and camera angle make a clear statement about the state of mind in Dr. Strangelove: or How I Learned to Stop Worrying and Love the Bomb.
OTHER GUIDELINES

Cutting people off at the ankles will look awkward; likewise, don't cut off their hands at the wrist. Naturally, a character's hands will often dart in and out of the frame, but for a long static shot, they should be clearly in or out. Pay attention to the heads of people standing in the background. When framing for our important foreground subjects, whether or not to include the heads of background people is a judgment call. If they are prominent, it is best to include them. If there is enough emphasis on the foreground subjects and the background people are strictly incidental or perhaps largely out of focus, it is OK to cut them off wherever is necessary.
LENS HEIGHT

Lens height can also be an effective tool for adding subtext to a shot. As a general rule, dialog shots are done at the eye level of the actors involved; this is the standard default setting, but you can vary it to achieve a variety of subtleties in lens point-of-view. Some filmmakers tend to avoid using straight-on eye-level shots, as they consider them boring. Variations from eye level can have story implications and psychological undertones, and can work as a compositional device, as in Figures 2.29 and 2.30.

Variations from eye level are not to be done casually, especially with dialog or reaction shots. Keep in mind that deviations from eye level ask the viewer to participate in the scene in a mode that is different from normal, so be sure that there is a good reason for it.
HIGH ANGLE

When the camera is above eye height, we seem to dominate the subject. The subject is reduced in stature and perhaps in importance. Its importance is not, however, diminished if the high angle reveals it to be a massive, extensive structure, for example. This reminds us that high angles looking down on the subject reveal overall layout and scope, which is why they are often used as establishing shots at the beginning of a scene, where it is important for the audience to know something about the layout. As with subjective and objective camera views on the lateral plane, we can see camera angles that diverge from eye level as increasingly objective, more third person in terms of our literary analogy. This applies especially to higher angles. A very high angle is called a god's eye shot (Figure 2.30), suggesting an omniscient, removed point of view.

Figure 2.21. (top) Negative space and unbalanced composition in Psycho.

Figure 2.22. (above) Frame-within-a-frame in Delicatessen.
LOW ANGLE

A low-angle shot can make a character seem ominous and foreboding, as in Dr. Strangelove (Figure 2.29). When a character is approaching something as seen from a low angle, little is revealed beyond what the character might see himself: we share the character's surprise or sense of mystery. If the shots of the character are low angle, we share his apprehension.

If these are then combined with high-angle shots that reveal what the character does not know, we are aware of whatever surprise or ambush or revelation awaits him: this is the true nature of suspense. As Hitchcock brilliantly observed, there can be no real suspense unless the audience knows what is going to happen. His famous example is the bomb under the table. If two characters sit at a table and suddenly a bomb goes off, we have a moment of surprise that is quickly over, a cheap shock at best. If the audience knows that the bomb is under the table and is aware that the timer is ticking steadily, then there is true suspense. If the audience knows the time is getting shorter, then the fact that the characters are chatting amiably is both maddening and engaging.

Although any time we get away from human eye level we are decreasing our subjective identification with the characters, low angles can become more subjective in other ways. Clearly a very low angle can be a dog's eye view, especially if it is cut in right after a shot of the dog and then the very low angle moves somewhat erratically and in the manner of a dog. This type of doggie POV is practically required for werewolf movies, of course. With low angles, the subject tends to dominate us. If the subject is a character, that actor will seem more powerful and dominant. Any time the actor being viewed is meant to be menacing or frightening to the character we are associating the POV with, a low angle is often appropriate.

Figure 2.23. (top) Strong directionality and the geometry of a sweeping curve in this shot from La La Land.

Figure 2.24. (above) Visual movement in the frame reinforces character relationships and subtext in this shot from Seven Samurai.

Figure 2.25. (top, left) Not enough noseroom.

Figure 2.26. (top, right) Proper noseroom.

Figure 2.27. (bottom, left) Too much headroom.

Figure 2.28. (bottom, right) About the right amount of headroom.
DUTCH TILT

In most shooting, we strive for the camera to be perfectly level. It is the job of the camera assistant to recheck every time the camera is moved and ensure that it is still "on the bubble." This refers to the bulls-eye bubble levels that are standard on all camera mounts, heads, and dollies.

Human perception is much more sensitive to off-level verticals than to off-level horizontals. If the camera is even a little off, walls, doorways, and telephone poles will be immediately seen as out of plumb. There are instances, however, where we want the visual tension of this off-level condition to work for us to create anxiety, paranoia, subjugation, or mystery. The term for this is "Dutch tilt" or "Dutch angle." See Figure 2.31.
ASPECT RATIOS

Before shooting starts, we have to decide on the aspect ratio—the shape of the frame—of the entire movie as it will be seen on a screen or a monitor. In the early days of film, the aspect ratio was decided by the manufacturers of the cameras, projectors, even the processing and editing equipment. At first, films were 1.33:1—1.33 units wide for every 1 unit of height (Figure 2.32). This was a product of the width of the film and the placement of the sprocket holes. Although it may be apocryphal, the story goes that George Eastman of Kodak asked Thomas Edison (whose associate William Dickson invented the film camera) how wide he wanted the rolls of celluloid film to be. Edison held his fingers about 1 1/3 inches apart (35mm) and said "About this wide." Eventually the Academy of Motion Picture Arts and Sciences defined 1.37:1 as the Academy Aperture.

Figure 2.29. A low angle underlines the madness of General Ripper in Dr. Strangelove.

Figure 2.30. A God's Eye shot illustrates the surrealistic nature of the world in Terry Gilliam's Brazil.

Figure 2.31. Dutch tilt is used extremely well in the mystery/suspense film The Third Man, where a great many shots are done with the camera off-level.
With the advent of television, film producers wanted theaters to show the public what they couldn't get at home, which inspired them to make the frame wider and wider, from 1.66:1 to even wider aspect ratios, with 1.85:1 becoming a widely used frame shape for some time. 1.78:1 was also popular, which led the developers of high-def televisions to standardize on 16:9, which is the shape of virtually any TV or monitor you can buy today. Although most movies (and certainly all commercials) are shot in this framing, filmmakers have used wider frames, such as 2:1 or even 2.35:1. See Figure 2.32 for a variety of aspect ratios.
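An aspect ratio is just width divided by height, which is why 16:9 and 1.78:1 name the same shape. As a minimal sketch (the function and the example frame sizes are illustrative, not from the text), here is how a wider film letterboxes inside a 16:9 screen:

```python
def letterbox(screen_w, screen_h, content_ratio):
    """Fit a wider aspect ratio into a screen; return image height and bar height."""
    image_h = screen_w / content_ratio  # image uses full width at reduced height
    bar = (screen_h - image_h) / 2      # black bar at top and at bottom
    return image_h, bar

print(round(16 / 9, 2))               # 1.78 -- 16:9 written as a decimal ratio
h, bar = letterbox(1920, 1080, 2.35)  # a 2.35:1 film on a 16:9 monitor
print(round(h), round(bar))           # 817 131
```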
1.33:1 The Cabinet of Dr. Caligari
1.66:1 Dr. No
1.78:1 (16×9) The Dark Knight Rises
1.85:1 Joker
03
language of the lens
THE LENS IN STORYTELLING

Figure 3.1. (top) An extremely long lens compresses space and brings the sun dramatically into this image from Empire of the Sun, clearly a visual metaphor for the Japanese empire.

Figure 3.2. (above) A wide lens expands the space and helps form the composition and physical relationships in this shot from Once Upon a Time in the West.

The language of the lens encompasses how the lens mediates and interprets the physical world for us—how it "renders" the image in different ways that can be used for effective visual storytelling. It is important for both the DP and the director to understand how lenses can be used and what effects can be achieved for particular visual purposes in the story. In this discussion, we also include the placement of the lens, which is an important directorial decision in framing up any shot; placement works together with the optical characteristics of the lens in creating the overall effect. The key optical aspects of a lens include:
• Perspective
• Compression or expansion
• Soft/hard
• Contrast
• Lens height
• The Lens and the Frame
As we use the term in this book, cinematic technique means the methods and practices we use to add layers of meaning, nuance, and emotional context to shots and scenes in addition to their objective content. The lens is one of the prime tools in achieving these ends. Together with selecting the frame, it is also the area of cinematography in which the director is most heavily involved. Understanding what a lens can do for your storytelling is vital not only to cinematographers but to directors as well. Choosing the frame and what lens to use is the director's decision—in fact, one of the most important choices.

Figure 3.3. In the foreground, Sheriff Brody learns about the shark attack. In the midground we see his family, the people he is dedicated to protecting. In the far background is the sea, where danger lurks.
FOREGROUND/MIDGROUND/BACKGROUND
As we discussed in The Frame, one of the key elements of film is that we are projecting three-dimensional space onto a two-dimensional plane. Except where we want this flatness, it is a goal to re-create the depth that existed
in the scene.
A big part of this is to create shots with a foreground, midground, and
background (Figures 3.2, 3.3, and 3.4). In the book Hitchcock/Truffaut,
Hitchcock makes the point that a basic rule of camera position and staging
is that the importance of an object in the story should equal its size in the
frame.
LENS PERSPECTIVE
As we discussed in the previous chapter, the fundamental aspect of the
frame is that it constitutes a selection of what the audience is going to
see. Some things are included, and some are excluded. The first decision
is always where the camera goes in relation to the subject, but this is only
half of the job. Once the camera position is set, there is still a decision to
be made as to how much of that view is to be included. This is the job of
lens selection.
Human vision, including peripheral vision, extends to around 180°. Foveal (or central) vision, which is more able to perceive detail, covers around 40°. In 35mm film, the 50mm is generally considered the normal lens; in fact, something around a 40mm is closer to typical vision. In video, the "normal" lens varies depending on the size of the receptor—16mm film, 70mm, and all the others have a different "normal," as do all video sensor sizes and formats. A normal lens is considered to be one whose focal length equals the diagonal of the receptor (the film frame or the video sensor). The focal length is significant in another way in addition to its field of view. Remember that all optics (including the human eye) work by projecting the three-dimensional world onto a two-dimensional plane. Lenses in the normal range portray the depth relationships of objects in a way fairly close to human vision.
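These two relationships—a "normal" focal length roughly equal to the sensor diagonal, and angle of view derived from focal length—can be sketched numerically. The sensor dimensions below (36 × 24 mm, the full-frame still format) are an illustrative assumption, not a value from the text; the angle-of-view formula 2·atan(w/2f) is the standard thin-lens approximation:

```python
import math

def diagonal_mm(width_mm, height_mm):
    """Sensor diagonal -- roughly the 'normal' focal length for that format."""
    return math.hypot(width_mm, height_mm)

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view: 2 * atan(w / 2f), in degrees."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Full-frame still dimensions, used here only as an example format
print(round(diagonal_mm(36, 24), 1))         # 43.3 -- near a 'normal' 40-50mm
print(round(horizontal_fov_deg(36, 50), 1))  # 39.6 -- near the ~40 deg of foveal vision
```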
Figure 3.4. A deep focus shot from Citi- THE LENS AND SPACE
zen Kane Three levels of the story are
shown in the same frame With a wider than normal lens, depth perception is exaggerated: objects
appear to be farther apart (front to back) than they are in reality. This
exaggerated sense of depth has psychological implications. The percep-
tion of movement towards or away from the lens is heightened; space
is expanded, and distant objects become much smaller. All this can give
the viewer a greater sense of presence—a greater feeling of being in the
scene—which is often a goal of the filmmaker. As the lens gets even wider,
there is distortion of objects, particularly those near the lens. This is the
fundamental reason why a longer focal length lens is considered essential
for a portrait or head shot. It’s a simple matter of perspective. If you are
shooting a close-up and you want to fill the frame, the wider the lens, the
closer the camera will have to be. As the camera gets closer, the percent-
age diference in distance from the nose to the eyes increases dramatically,
which causes distortion.
For example, if the tip of the nose is 30 cm (centimeters) from the lens,
then the eyes may be at 33 cm, a 10% difference. With a wide lens, this is
enough to cause a mismatch in size: the nose is exaggerated in size com-
pared to the face at the plane of the eyes. With a longer than normal lens,
the camera will be much farther back to achieve the same image size. In
this case, the tip of the nose might be at 300 cm, with the eyes at 303 cm.
This is a percentage difference of only 1%: the nose would appear normal
in relation to the rest of the face. The same fundamental principle applies
to the perception of all objects with very wide lenses.
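The nose-to-eyes arithmetic above can be sketched in a few lines (the function name is ours, for illustration only):

```python
def relative_size(near_cm, far_cm):
    """Apparent size scales as 1/distance, so a feature at near_cm appears
    larger than one at far_cm by this ratio."""
    return far_cm / near_cm

# Wide lens, camera close: nose at 30 cm, eyes at 33 cm
print(relative_size(30, 33))    # 1.1 -> the nose renders 10% too large
# Long lens, camera far back: nose at 300 cm, eyes at 303 cm
print(relative_size(300, 303))  # 1.01 -> only a 1% exaggeration
```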
Another aspect of wide lenses is that at a given distance and f/stop, they
have greater depth-of-field. We'll get into the details in later chapters, but
perceptual ramifications are very much a part of the psychology of the
lens. This greater depth-of-field allows more of the scene to be in focus.
This was used to great effect by Gregg Toland, who used it to develop
an entire look called deep focus, such as in the frame from Citizen Kane
(Figure 3.4).
40 · cinematography: theory and practice
This deep focus facilitates composition in depth to an unprecedented degree. Throughout the film we see action in the background that complements and amplifies what we are seeing in the foreground. For example, early in the film, we see Mrs. Kane in the foreground, signing the agreement for Mr. Thatcher to be the young Charles Foster Kane's guardian. Throughout the scene, we see the young man through a window, playing outside with his sled even as his future is being decided.

Figure 3.5. (top) Very long lens perspective makes this shot from Rain Man abstract. It is reduced to the simple idea of beginning a journey into the unknown future; the road seems to rise up into their unknown future. It is no accident that this frame is used on the poster for the film; it elegantly expresses the basic story of the film.
Welles also uses the distortion of wide-angle lenses for psychological effect. Frequently in the film we see Kane looming like a giant in the foreground—a metaphor for his powerful, overbearing personality. Later, he uses the exaggerated distances of wide lenses to separate Kane from other characters in the scene, thus emphasizing his alienation. A dramatic example of the use of different focal lengths for their different effects is the punch-in—where a scene is shot in a fairly wide angle and then there is a straight cut to a very long lens; the effect can be stunning (Figures 3.8, 3.9, 3.10, and 3.11).

Figure 3.6. (above) A wide lens is essential to this shot from a later scene in Rain Man. Trapped in the car with his extremely annoying brother, the wide shot in the emptiness of the prairie emphasizes how the car is like a lifeboat from which there is no escape.

Figure 3.7. A world out of whack, power perverted, and morality scrambled is portrayed by an extreme wide angle lens that distorts space in this scene from The Favourite.
COMPRESSION OF SPACE
At the other end of this spectrum are long focal length lenses, which you
might hear referred to as telephoto lenses. They have effects that are opposite of wide lenses: they compress space, have less depth-of-field, and de-emphasize movement away from or toward the camera.
This compression of space can be used for many perceptual purposes:
claustrophobic tightness of space, making distant objects seem closer,
and heightening the intensity of action and movement. Their ability to decrease apparent distance has many uses, both in composition and in creating psychological space (Figures 3.1, 3.5, and 3.9).
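Compression can be illustrated numerically. If the subject is kept the same size in frame, image size falls off as 1/distance, so the ratio below tells you how large a background object of the same real size renders relative to the subject. The distances are a hypothetical setup, not from the text:

```python
def background_to_subject_ratio(subject_dist_m, background_offset_m):
    """For two objects of equal real size, image size is proportional to
    1/distance, so the background renders at this fraction of the subject."""
    return subject_dist_m / (subject_dist_m + background_offset_m)

# Wide lens close to the subject: a wall 4 m behind looks far away
print(background_to_subject_ratio(1.0, 4.0))   # 0.2 -> 20% of the subject's size
# Long lens, camera pulled back for the same framing: the gap "compresses"
print(background_to_subject_ratio(16.0, 4.0))  # 0.8 -> 80% of the subject's size
```

The long-lens setup renders the background at four times the relative size, which is exactly the compressed, flattened look described above.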
The effect of having objects seem closer together is often used for the very practical purpose of making stunts and fight scenes appear more dramatic and dangerous than they really are. With careful camera placement and a long lens, a speeding bus can seem to miss a child on a bicycle by inches, when in fact, there is a comfortably safe distance between them, a trick frequently used to enhance stunt shots and action sequences. The limited depth-of-field can be used to isolate a character in space. Even though foreground and background objects may seem closer, if they are drastically out of focus, the sense of separation is the same. This can result in a very detached, third-person point-of-view for the shot. This detachment is reinforced by the fact that the compression of space makes more tangible the feeling that the real world is being projected onto a flat space. We perceive it more as a two-dimensional representation—more abstract; this is used very effectively in Figure 3.1.

Changing lenses for the amount of the information the lens gathers (its 'field') is only a partial use of a lens. Lenses have different feelings about them. Different lenses will tell a story differently.
Sidney Lumet (12 Angry Men, Network, The Verdict)
Another use of long lenses for compression of space is for beauty. Most
faces are more attractive with longer lenses. They are known as portrait
lenses for still photographers who do beauty and fashion or portraiture.
Movement toward us with a long lens is not as dynamic and, therefore, is
abstracted. It is more of a presentation of the idea of movement than perceived as actual movement of the subject. This is especially effective with
shots of the actors running directly toward the camera; as they run toward
us, there is very little change in their image size. We would normally think
of this as decreasing the sense of movement, but in a way, it has the opposite effect. The same is true of slow motion. Although shooting at a high frame rate actually slows the movement down, our perceptual conditioning tells us that the people or objects are actually moving very fast—so fast that only high-speed shooting can capture them on film. Thus shooting something in slow motion and with a long lens has the ultimate effect of making the movement seem faster and more exaggerated than it really is. The brain interprets it in a way that contradicts the visual evidence. This is an excellent example of cultural conditioning as a factor in film perception. The convention for showing someone running very fast is to shoot with a long lens and in slow motion. If you showed a long-lens, slow-motion shot of someone running to a person who had never seen film or video before, they might not understand at all that the person is running fast. More likely they would perceive the person as almost frozen in time through some sort of magic.

Figure 3.8. (top) This wide shot comes at the end of a chase scene in 9 1/2 Weeks; the characters have been chased by a gang of violent thugs.

Figure 3.9. (above) At the moment they realize they have lost their attacker, a severe lens change punches in to the scene. It is a high-energy cut that gets us closer so that we are experiencing the scene along with the characters rather than as an abstract, at-a-distance chase scene. We are drawn into their excitement and identify with their exuberance. The sudden loss of depth-of-field isolates them in the landscape and gives our attention nowhere else to go. The punch-in changes the visual texture to match the mood.
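The arithmetic behind the slow-motion convention discussed above is simple: footage shot at a high frame rate and played back at normal speed is slowed by the ratio of the two rates. A minimal sketch, assuming 24 fps playback:

```python
def slowdown_factor(capture_fps, playback_fps=24):
    """Footage shot at capture_fps and played back at playback_fps is
    slowed by this factor."""
    return capture_fps / playback_fps

print(slowdown_factor(96))   # 4.0 -> the action plays four times slower
print(slowdown_factor(120))  # 5.0
```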
Figure 3.10. (top) A visually powerful punch-in from Gladiator, as the main characters rise into the arena from the underground in a wide shot.

Figure 3.11. (bottom) The switch to a very long lens punctuates the moment and intensifies the drama as well as simply being dramatic and visually striking.

SELECTIVE FOCUS
The characteristic of relative lack of depth-of-field can be used for selective focus shots (Figures 1.1 and 3.12). As discussed above, shallow depth-of-field can isolate the subject (Figures 3.13 and 3.14). The essential point is that focus is a storytelling tool. This is a drawback of 16mm film and some HD/UHD cameras. Because they often have smaller sensors, they have far more depth-of-field than 35mm film, thus making it more difficult to use focus in this way; however, many digital cameras now have sensors that are the same size as a 35mm film frame or even larger. Depth-of-field is a product of focal length, the aperture, and the sensor size, not whether it is film or video. See the chapter Optics & Focus for more on selective focus. If you want to reduce depth-of-field on a camera with a smaller sensor, some people will say "pull back and use a longer lens" or "shoot wide open." These are not always options, especially in a tight location.
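The claim that depth-of-field is a product of focal length, aperture, and sensor size can be sketched with the standard hyperfocal-distance approximations. The circle-of-confusion values below are common assumed figures, not from this book:

```python
def depth_of_field_mm(f, N, c, s):
    """Total depth-of-field from the standard thin-lens approximations.
    f: focal length (mm), N: f-stop, c: circle of confusion (mm),
    s: subject distance (mm)."""
    H = f * f / (N * c) + f                      # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)         # near limit of sharpness
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return far - near

# 35mm-size sensor: 50mm at f/2.8, subject at 3 m, c assumed 0.03 mm
print(round(depth_of_field_mm(50, 2.8, 0.03, 3000)))   # about 600 mm in focus
# Sensor half the size, same framing and stop: 25mm lens, c assumed 0.015 mm
print(round(depth_of_field_mm(25, 2.8, 0.015, 3000)))  # about 1250 mm, twice as deep
```

Halving the sensor (and the focal length, to keep the same framing) at the same stop roughly doubles the depth-of-field, which is why small-sensor cameras make selective focus harder.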
Focus can also be shifted during the shot, thus leading the eye and the
attention of the viewer. The term for this is rack focus (Figure 3.12), in
which the focus is on an object in the foreground, and then the camera
assistant radically changes the focus so that it shifts dramatically to another
subject either in front of or behind the original subject. Not all shots lend
themselves to the technique, especially when there is not enough of a
focus change to make the effect noticeable. A downside of rack focusing is that some lenses breathe when changing focus; this means they appear to change focal length while shifting focus. Also with tracking shots that are very tight and close, we can watch as objects come into view, then slowly come into focus, then go soft again. Selective focus and out-of-focus can also be highly subjective visual metaphors for the influence of drugs or madness. The bottom line is that focus is an important storytelling tool and a part of the overall look of a particular production.

Figure 3.12. Rack focus is an essential part of the language of cinema. A sense of timing is critical to executing a proper rack focus that reinforces the scene and doesn't call attention to itself. You are guiding the audience's eye; it's not a purely mechanical thing. Like a dolly move, you don't want to arrive too early or too late. Another reason rehearsals are such a good idea.
Another issue in selective focus is when two or more players are in the
same shot but at different distances. If you don't have enough light to set
the lens to a higher f/stop (and thus you don't have much depth-of-field),
it may be necessary for the focus puller to choose one or the other to be in
focus. This is up to the DP or director to decide, and they should consult
before the shot—and don’t forget to let the focus puller know. A few basic
rules of thumb:
• Focus goes to the person speaking. It is permissible to rack focus
back and forth as they speak.
• Focus goes to the person facing the camera or most prominent in
the frame.
• Focus goes to the person experiencing the most dramatic or emotional moment. This may countermand the principle of focusing on the person speaking.
If there is doubt about whom to focus on, most camera assistants put
the focus on the actor who has the lower number on the call sheet. This
may sound frivolous, but it's not. Call sheets list the actors in numbered
order of their characters. The lead is actor #1, and so on. If they are close
enough, the focus puller may split the focus between them (if there is enough depth-of-field to keep both of them acceptably sharp) or rack very subtly back and forth. Major focus racks need to be discussed in
advance and rehearsed. This is true of all camera moves that are moti-
vated by dialog or action. If the AC and the operator haven’t seen what the
actors are going to do, it is difficult to anticipate the move just enough to
time it correctly. Rehearsal is a time saver, as it usually reduces the number
of blown takes.
It is interesting to note that older books on cinematography barely mention focus at all. There is a reason for this. Until the sixties, it was the established orthodoxy that pretty much everything important in the frame should be in focus. The idea of having key elements in the frame that are deliberately out of focus really didn't fully take hold until it was popularized by fashion photographers in the eighties. It is now recognized by filmmakers as a key tool of filmmaking. More about factors that affect focus and depth-of-field in Optics & Focus, later in this book.

Figure 3.13. (top) Deliberate lens flare is an essential part of the look of this shot from a popular Christmas movie.

Figure 3.14. (above, middle) A normal lens keeps the background in focus; it can be distracting.

Figure 3.15. (above, bottom) A very long lens throws the background out of focus and the viewer's attention is drawn to the character. The lens also needs to be as wide open (lowest f/number) as possible. In a daylight situation like this one, this means shooting at a low ISO and probably using a Neutral Density filter. See the chapters on Exposure, Optics & Focus, and Image Control for more details on this issue.

FLARE/GLARE
A direct, specular beam of light that hits the lens will create a flare that creates veiling glare, which appears as a sort of milky whiteness over the whole image. This is why so much attention is paid to the matte box or lens shade and why the grips are often asked to set lensers—flags that keep direct light sources from hitting the lens. There are exceptions, such as in Figure 3.13, where deliberate flare is used as a visual device to set a certain tone for the shot.
04
continuity
Figure 4.1. A well-known example of an error in continuity—in this shot from Pulp Fiction, the bullet holes are already in the wall, although the guy hiding in the bathroom has not yet fired at them. This is the kind of thing the general public thinks of when they hear the term continuity, but as we'll see there is much more to it than this. This particular example is the type that the audience is almost never going to notice, but this is not to say that continuity isn't important, just that you can sometimes get away with some small mistakes.

CONTINUITY
Continuity is perhaps the most important element of invisible technique: it's all about maintaining the illusion by avoiding jarring cuts that will take the audience out of the flow of the story.

Without continuity, a film becomes a series of unnatural, jarring moments that take a viewer out of the illusion and distract them from the story. A planned lack of continuity can be a useful technique to create tension and confusion in a scene, but be very careful not to overdo it. Again, it's about organizing the material for the brain—to make it understandable, but understandable in the way we want it to be understood—this is the primary goal of editing a narrative film and thus it has to be a primary goal of how we go about shooting the original material.
SHOOTING FOR EDITING
Filming is ultimately shooting for editorial. The primary purpose of shoot-
ing is not merely to get some “great shots”—in the end it must serve the
purpose of the flm by giving the editor and the director what they need
to actually piece together completed scenes and sequences that add up to a
finished product that makes sense, has emotional impact, and accomplishes
its purpose. Cutting on action is a frequently used method for ensuring a
seamless edit (Figures 4.2 and 4.3); it is essential that the director and DP
be aware of these issues while shooting.
THINKING ABOUT CONTINUITY
Movies get made one scene at a time, and scenes get made one shot at a
time. No matter how large and complex a production is, you are always
still doing it one shot at a time. As you do each shot, you have to keep the
overall purpose in mind: that this shot must fit in with all the other shots
that will make up the final scene. Continuity is a big issue in filmmaking.
It’s something we have to be aware of at all times. Continuity mistakes
can easily render several hours of shooting worthless or can create huge
problems in editing. So what is continuity? Basically, continuity means
a logical consistency of the story, dialog, and picture so that it presents
the appearance of reality. Here’s a simple example: in a wide shot, he is
not wearing a hat. Then we immediately cut to a close-up and he is wear-
ing a hat. It appears to the viewer as if a magic hat suddenly appeared on
his head. When the audience is aware of continuity errors, it makes them
aware they are watching a movie; it breaks the illusion.
Although continuity is primarily the job of the director and the script
supervisor, it is very important that the director of photography has a
thorough understanding of the principles of continuity and how to go
about making sure the footage is “cuttable,” meaning that it is material
that the editor can use to put together the best scenes possible.
Figure 4.2. (left) Cutting on action is an often used method for ensuring smooth editorial continuity of a scene. Here he is starting to sit—he's in motion.

Figure 4.3. (right) In the next frame from a different angle, he is still in the same motion, resulting in a seamless edit.

TYPES OF CONTINUITY
There are several categories of continuity and each has its own challenges and possibilities. They are:
• Content
• Movement
• Position
• Time
CONTINUITY OF CONTENT
Continuity of content applies to anything visible in the scene: wardrobe,
hairstyle, props, the actors, cars in the background, the time set on the
clock. As we will talk about in Set Operations, it is the script supervisor in
conjunction with the various department heads who must ensure that all
of these items match from shot to shot.
These kinds of problems extend from the very obvious—she was wear-
ing a red hat in the master, but now it is a green hat in the close-up—to
the very subtle—he was smoking a cigar that was almost finished when he entered and now he has a cigar that is just started. While the script supervisor, on-set wardrobe, and prop master are the first line of defense
in these matters, it is still up to the director and camera person to always
be watchful for problems.
As with almost anything in film there is a certain amount of cheating that
is possible; filmgoers can be very accepting of minor glitches. Absolutely
perfect continuity is never possible, and there is a large gray area.
HEAD POSITION
One type of continuity that plays a large role in the editor’s decision about
choosing the edit point between two shots of an actor is head position.
Whether the actor’s head is up or down, turned left or right is something
the audience is likely to notice. In theory, actors should always try to make
their movements in coverage match what they did in the master and then
to repeat their movements for each successive take. That’s theory, but real-
ity is not so simple, especially in scenes that involve a lot of movement by
the characters or very emotional scenes. Understandably, directors tend
to be much more concerned about the performance and the development
of the scene than they are about the actor’s head and hand movements.
This is one reason why doing more than one take of a shot that is part of
the coverage of a scene is always a good idea. It is also one of the reasons
why we shoot the coverage (mediums, close-ups, etc.) all the way from the
beginning to the end of the scene on every take.
continuity · 49
CONTINUITY OF MOVEMENT
Anything that is moving in a shot must have a movement in the next shot
that is a seamless continuation of what was begun. Whether it be opening
a door, picking up a book, or parking a car, the movement must have no
gaps from one shot to the next. This is where it is so critical to be aware of
how the shots might be cut together.
As discussed in Shooting Methods, to play it safe in shooting any type of
movement and be sure that the editor is not constricted in choices, it is
important to overlap all movement. Even if the script calls for the scene
to cut away before she fully opens the door, for example, it is best to go
ahead and let the camera roll for a few seconds until the action is complete.
Never start a shot exactly at the beginning of a movement—back up a bit
and roll into it, then let it run out at the end. One prime example of this
is the rock in. Say you shot a master of a character walking up to the bank
teller. He is there and is talking to the teller in the master. You then set up
for a close-up of him. You may know that the edit will pick up with the
character already in place, but the safe way to do it is to have the character
do the final step or two of walking up as shot in the close-up OTS position.
There are times, however, when focus or position is critical. It is difficult
to guarantee that the actor will hit the mark with the precision necessary
to get the shot in focus. In this case, a rock in is the way to go. The technique is simple—instead of actually taking a full step back, the actor keeps one foot firmly planted and steps back with the other; then when action is called, she can hit her mark again with great precision.
CONTINUITY OF POSITION
Continuity of position is most often problematic with props. Props that
are used in the scene are going to be moved in almost every take. Every-
one must watch that they start and end in the same place, or it can be an
editor’s nightmare. This is often the dividing line between a professional
performer and a beginner: it is up to the actor to place them exactly the
same in every take. If there is a mismatch in placement of a prop between
the master and an element of coverage, it is up to the director to either reshoot one or the other or to shoot some sort of coverage that will allow the editor to solve the problem.

This can be done in a variety of ways. One simple example: if the actor put the glass down on the left side of the table in the master, but it is on the right side in the medium, one solution is to do a shot where the actor slides it across the table. This solves the problem, but there is one drawback: the editor has to use that shot, whether it helps the scene or not. This may end up creating more problems than it solves.

Figure 4.4. A simple and purely visual establishing sequence opens The Maltese Falcon. It starts on the Golden Gate Bridge (top), so we know we're in San Francisco; pulls back to reveal the window (middle) and sign, so we know we're in an office; and then down to the shadow on the floor (bottom) that introduces the name of the detective agency and the name of the main character. It is an elegant visual introduction.
CONTINUITY OF TIME
This does not refer to the problem of resetting the clock for each take so that it always reads the same time (that is prop continuity and comes under continuity of content); rather, it has to do with the flow of time within a scene. If Dave North is walking away from Sam South in the wide shot, then you cut to a close-up of Sam South; by the time you cut back to Dave North, his action must be logical time-wise. If the close-up of Sam South was on screen for one second, when cutting back to the wide shot, Dave North can't have walked fifty yards away.
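A quick sanity check of the Dave North example (the 1.4 m/s walking speed is an assumed typical value, not from the text):

```python
def plausible_walk_distance_m(cutaway_seconds, walking_speed_mps=1.4):
    """How far a character can plausibly walk during a cutaway,
    assuming a typical walking speed of about 1.4 m/s."""
    return cutaway_seconds * walking_speed_mps

# A one-second cutaway to Sam South buys Dave North about 1.4 m of travel;
# fifty yards (about 46 m) would imply a speed no human can reach.
print(plausible_walk_distance_m(1.0))  # 1.4
```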
THE PRIME DIRECTIVE
Most of the techniques and rules of continuity are based on one principle:
to not create confusion in the mind of the audience and thus distract them
from the story. To a great extent, the methods we use to shoot scenes and
coverage are aimed toward this end. While things like prop continuity are
not the cinematographer’s job, maintaining correct screen direction, eye-
lines, and continuity of movement are very much up to the DP and the
director. This is particularly true when working with a first-time director
who may not have a complete grasp of how to execute coverage properly.
Figure 4.5. (above) As long as you stay on the same side of the line, any camera position, any framing, any lens height, any focal length will be OK in terms of screen direction.

Figure 4.6. (top left) Screen direction is established by where the camera is in relation to the subjects. Once it is established in the scene (usually by the master shot) it should be maintained for all shots in the scene—otherwise the audience will be confused as characters suddenly jump from one side of the screen to the other.

Figure 4.7. (top right) With the camera on his left/her right, we see her on the left side of the screen and him on the right side.

Figure 4.8. (below left) The characters stay where they are, but the camera moves to the other side of the line.

Figure 4.9. (below right) With the camera now on the other side, their screen positions are reversed.

THE ACTION AXIS
There is an imaginary axis between these two characters. In our first example of the car, the movement direction of the car establishes what we call the line. In all of these diagrams, it is represented by the large dashed line. The line is referred to by several terms; some people call it the action axis or the action line, but most commonly just the line (Figure 4.5). If we stay on one side of it for all our shots—everything cuts together perfectly. If we cross over to the other side—the characters will jump to opposite sides of the screen. In practice the camera can go nearer or farther, higher and lower, in relation to the subjects; the lens focal length can change, and so on—what is important is that by keeping the camera on the same side of the line, the screen direction does not change.
SCREEN DIRECTION
Let’s take this simple two shot (Figure 4.6). From our frst camera posi-
tion, in Figure 4.7, Lucy is on the left and Ralph is on the right. Then, in
Figure 4.8, the camera position is shifted to the other side. In Figure 4.9,
the audience will see, for no reason they can understand, that Ralph is on
the left side facing right and Lucy is on the right side facing left. It will
confuse the audience. While their brains try to sort it out, their attention
will be drawn away from the story. Not only will they be distracted from
the story, but if it happens often enough, it will annoy and frustrate them.
What is it that dictates where we can put the camera to maintain good
continuity?
Another example: two people are standing on opposite sides of the street
(Figure 4.15). The woman sees the car going right (Figure 4.16). The man
sees the car going left (Figure 4.17). If we move them to the same side of
the street (4.18), they will both see the car going in the same direction in
relation to their own sense of orientation (left/right): their perception of
the car will be the same (4.19). The movement of the car establishes direc-
tion, but there is another aspect: where they are viewing the movement
from is also important; it is defined by the line, sometimes called the 180°
line or the action axis.
THESE ARE THE RULES—BUT WHY?
The basic rules of not crossing the line are well known to all working
filmmakers, but many do not stop to consider the fundamental theory and
perceptual issues that underlie this principle. It is important to understand
it at a deeper level if you are to be able to solve the trickier issues that do
not conveniently fall into one of the basic categories of this system. More
(Screen direction diagram, with the labels "What she sees," "What he sees," and "The Line.")
REVERSE
The reverse is a simple technique that turns out to be difcult to understand
conceptually. A reverse is when you deliberately move the camera to the
other side of the line. But we just learned that this is a big mistake, right?
You can do it, but you have to understand how it works in order for it not
to be perceived as a mistake. To oversimplify—if you go just a little bit over the line, the audience is going to see it as a mistake; it's confusing. If you go a lot over the line, even all the way to the complete opposite of what you have established, the audience is going to be able to understand what has happened—the camera is just on the other side of the room, no big deal. Just crossing the line slightly is a problem. It is when you definitively and unquestionably are on the other side of the line that a reverse is understandable. It is helpful if the background is noticeably different. Another
way to cross the line is to see the camera move to the other side of the line,
as in a dolly move. Then there is no confusion.
TURNAROUND
Obviously, you would never do the over-the-shoulder on one actor, then
move the camera to do the OTS on the other actor, then go back to the
first for the close-up, and so on. This would be very inefficient and time-consuming. So naturally you do all the coverage on one side, then move
the camera and reset the lighting. The term for this is that you shoot out
one side before you move to the other side, which is called the turnaround.
Many beginning directors call it the reverse, as in “We’re done with her
shots, now let’s shoot the reverse on him.” Technically, this is wrong, but
as long as everyone understands what is meant, it doesn’t matter. It’s also
why many people refer to it as a true reverse, to distinguish it from a mere
turnaround (Figures 4.20 and 4.21).
Figure 4.22. (top) In this sequence from High Noon, leaving town is established as going left.

Figure 4.23. (below) When the Marshall decides he must stand and fight, we clearly see him turn the carriage around and head back the other way. When the carriage is moving to the right, we know that they are going back toward town. If we didn't see him turning around, the fact that the carriage is now going the opposite way would be confusing.
ANSWERING SHOTS
The group of mediums, over-the-shoulders, and close-ups (the coverage) on the second actor that match the ones done on the first actor are called the answering shots. Every shot you do in the first setup (Figure 4.26) should have an answering shot (Figure 4.27). Answering shots need to show the actors as roughly the same size for them to cut together smoothly. For each setup, the AC will note the lens used, the focus distance to the actor, and the lens height above the floor. All of these need to match when you turn around to shoot the other actor. Figure 4.28 is from the same scene. It is not an answering shot, but that doesn't mean you can't use it—it's a perfectly good part of the coverage; it just doesn't happen to be an answering shot for Figure 4.27. It is an answering shot for a wider shot of the host, which was not used in the final edit of the scene.

Figure 4.24. To be a truly neutral axis shot, the object or character must exit the top or bottom of the frame. Shots of this type can be useful in editorially switching the scene from one side of the line to the other—once you cut to a neutral axis shot, you are free to cut back to shots on the other side of the line.
CHEATING THE TURNAROUND
In cases where some physical obstacle precludes a good camera position for the turnaround, or perhaps the sun is at a bad angle, or there isn't time to relight for the turnaround, it is possible to cheat. In any of these cases, it is possible to move the camera and lights only a little and just move the actors. This is a last-ditch measure and is only used in cases where the
background for one part of the coverage is not usable or there is an emer-
gency in terms of the schedule—if, for example, the sun is going down.
Figure 4.25. Sometimes it is necessary to cheat a turnaround. If for some reason there is no time to set up and light a real turnaround, or if the new background is somehow unusable, then it is possible to cheat the shot by merely spinning the actors around (middle two frames).
However, if you only spin them, the camera ends up being over the "wrong" shoulder. Based on the shot in the top right frame, the answering shot should be over her right ear. In the middle frames, we see it is over her left ear. In the bottom two frames we see it done correctly: the actors are reversed in their positions, but the camera is shifted to camera left so that it is now over the correct shoulder for the cheated shot.
The idea is that once we’re in tight shots, we really don’t see much of the
background. It is not correct, however, to just have them switch places. In
cheating a turnaround, you have to either move the camera a couple of
feet, or even better, just slide the foreground actor over so you are over
the correct shoulder. (Fortunately, moving the foreground actor usually
involves moving the key only a little to make sure it is on the correct side
of the face.) The key to a successful cheat is that the background either be
neutral or similar for both actors as established in any previous wide shots.
Figure 4.26. The over-the-shoulder in coverage on a scene from Joker.
Figure 4.27. The answering shot for the medium on Fleck. Same lens and same head size. Matching the lens and distance from lens to subject is crucial to getting good answering shots.
Figure 4.28. This is not an answering shot for the medium on the host. However, it is a perfectly legitimate part of the coverage of the scene, just not an answering shot.
In some cases, moving props can help establish the cheat. Also, be sure the
actor’s eyelines are correct: if she was looking camera right on the previ-
ous shot, she should be looking camera right for the cheated turnaround.
PLANNING COVERAGE
Which brings us to another key point that is easily overlooked: whenever
you are setting up a master, take a moment to think about the coverage—
this is a job for the director but also the cinematographer as the director is
not likely to be thinking about the problems of lighting or camera posi-
tions. Make sure that there is some way to position the camera for proper
answering shots. Particularly if one character’s coverage is more dramatic
or more crucial to the story than the other, it is very easy to get yourself
backed into a corner or up against an obstruction that makes it difficult or
impossible to position the camera for a proper answering shot. The back-
ground must change noticeably or be nondescript. This works best out-
Figure 4.29. The 20% rule and the 30° rule are pretty much the same thing, because 20% of 180° is 36°—close enough, as they are both just very rough guidelines. What is important is not some exact figure; the crucial element is that the two shots appear different enough to the audience so that they cut together smoothly. You should consider these guidelines an absolute minimum. Often they are not enough of a change. It is best to combine the 30° move with another change, such as a different focal length, to ensure cuttability.
PROP CONTINUITY IN COVERAGE
Mistakes in prop continuity are easy to make, especially on small productions where you may not have a full prop crew on the set or when you don't have an experienced continuity supervisor. It is important to stay vigilant for even the smallest details, as bad continuity is something that will mark your project as "amateurish." On the other hand, there is such a thing as "too much continuity." Sometimes, script supervisors can become so obsessed with tiny details, which won't really be apparent to the audience, that they can start to get in the way of the production. It is important to find a balance. The prop master is responsible for maintaining prop continuity.
EYE SWEEPS
When an off-screen character walks behind the camera, the on-screen character may follow with her eyes. It's perfectly OK as long as the eye sweep is slightly above or below the lens. The most important thing about eye sweeps is that they match the direction and speed of the crossing character. This means that the on-screen actor will move their head in the opposite direction of the movement of the crossing. Of course, as always, the actor must not look directly into the lens, even for just a moment; to do so breaks the fourth wall and ruins the illusion.
Figure 4.40. This master from Ronin (A) establishes the main group and their places around the table. The next cut (B) reveals the woman at the head of the table (separate from the group) in a way that shows her relationship to them; it orients us to the overall arrangement. This is a reverse. (C) This cut to a three shot maintains the screen direction established in the master (A). This is a good example of how seeing a piece of the table helps to keep us grounded in the scene. If it wasn't there, we would definitely miss it. (D) This shot of the girl is outside of the group, so that we are seeing her from over a different shoulder of the foreground actor than we saw in (B), but it's not a problem as the screen direction is maintained. (E) This single on the man in the suit also benefits from seeing a piece of the table. If it was necessary to move the table to get the shot, the set dressers will often have something smaller that can stand in for it. (F) This shot is from the POV of the man in the suit, but we don't see any of him in the foreground. In this case we are said to be "inside" him—not inside his body, but inside his field-of-vision.
GROUP SHOTS
Scenes with more than three characters generally require a good deal of
coverage. Shooting a group of characters sitting around a table can be
challenging and usually takes a good deal of time to get proper coverage.
Keeping the eyelines correct is especially important. If there is a dominant
direction to the arrangement of the group, that will most likely dictate a
screen direction line based on where you shoot the master from. In prac-
tice, it may be possible to shoot from almost anywhere as long as you get
the proper answering shots and coverage. However, it may be better to
pick a line and stick to it. These frames from a group scene in Ronin illustrate some of these principles (Figure 4.40). Notice in particular the slight difference between B and F. Both are shots down the middle of the axis;
B, however, is an over-the-shoulder past the man in the suit, while F does
not include him, and is more of his POV.
Figure 4.41. The basic principles of an over-the-shoulder shot apply even if the angle is more extreme than normal—the head and shoulders of the foreground person are clearly visible; the main subject is in focus and the eyelines match. From Kubrick's The Killing.
CUTAWAY EYELINE CONTINUITY
Since cutaways are not part of the main scene but are physically related to it, directional continuity must be maintained between the location of the scene and the cutaway element. This is especially important for cutaways that involve a look from the additional character, which they often do (Figures 4.2 and 4.3).
EYELINES IN OVER-THE-SHOULDER COVERAGE
When shooting over-the-shoulder coverage, the camera height will gener-
ally be at eye level for the characters. If the two performers are of unequal
height, some modification may be necessary. In this case, the camera height
will approximately match that of the character over whose shoulder you
are shooting (Figure 4.41).
EYELINES FOR A SEATED CHARACTER
The same principle applies when one or more of the characters is seated or on the floor (Figure 4.41), but with an important exception. Since shooting over the shoulder of the standing character might be an extreme angle, it also works to keep the camera at the eye level of the seated performer, which makes it sort of a past-the-hips shot. When there is a difference in height or level of the characters in coverage, the eyelines may also need some adjustment.
05
shooting methods
Figure 5.1. An establishing shot from Skyfall. Not just establishing the place (London) but also the character in the location, and it shows us an important story point—Bond is back!
WHAT IS CINEMATIC?
It's easy to think of filmmaking as not much more than "We'll put the actors on the set and roll the camera." Obviously, there is much more involved, but it's important to understand that even if all you do is record what is in front of the camera, you are still making definite decisions about how the viewers are going to perceive the scene. This is the crucial point: ultimately, filmmaking is about what the audience "gets" from each scene, not only intellectually (such as the plot) but also emotionally and, perhaps most importantly, how it contributes to their understanding of the story.
A QUESTION OF PERCEPTION
First of all, we have to recognize that how we perceive the world in a film is fundamentally different from how we perceive the world with our eyes and ears. Film only presents the illusion of reality, and a big part of our job is to sell that illusion.
What do we mean when we say something is cinematic? Most of the time, when people say a novel or a play is "cinematic" they mean it is fast-paced and visual. Here, we use it in a different way; in this discussion we use the term to mean all the techniques and methods of filmmaking that we use to add layers of meaning to the content.
HOW FILM IS DIFFERENT FROM THEATER
In the theater, the audience views the play from one angle. If an actor is on
the left, she or he remains on the left until we see them move somewhere
else. The perspective of the audience never changes.
In the early days of cinema, many practitioners were theatrical people.
When they frst saw the movie camera, they conceived it as a tool to
extend their audience: they just put the camera where the audience would
be and used it to record a performance. The upshot of this is that the
entire performance is viewed from a single point-of-view, which is how a
theatergoer sees a play. As a result, in early films the camera did not move, there were no close-ups, no shifting point-of-view—practically none of the tools and techniques of cinema as we know them now.
In short, these early films depend almost entirely on their content, just
as theater does, but they lack the immediacy and personal experience of
a live theater performance. The history of cinema can easily be studied as
the introduction and addition of various techniques and methods that we
call “cinematic”—in other words, the conceptual tools we referred to in
the previous chapters: the frame, the lens, light and color, texture, movement, establishing, and point-of-view. In this chapter, we will deal primarily with the frame and another essential tool: editing. While editing is not the cinematographer's job, it is critical to understand that the job of the DP and director working on the set is to provide the editor with footage that he or she can use creatively and effectively.
Figure 5.2. Kubrick often uses a static frame to convey basic ideas about the social structure of the situation, as in this shot from Barry Lyndon. This scene is an in-one—the whole scene done in one shot.
THE FRAME
Setting the frame is a series of choices that decide what the viewer will
see and not see. The frst of these decisions is where to place the camera
in relation to the scene. After that, there are choices concerning the field of vision and movement, all of which work together to influence how
viewers will perceive the shot: both in outright content and in emotional
undercurrent and subtext to the action and the dialog.
STATIC FRAME
A static frame is a proscenium. The action of the scene is presented as a stage
show: we are a third-person observer. There is a proscenium wall between
us and the action. This is especially true if everything else about the frame
is also normal—that is, level, normal lens, no movement, and so on. This
does not mean, however, that a static frame is without value. It can
be a useful tool that carries its own baggage and implications of POV and
world view (Figure 5.1).
In Stanley Kubrick's film Barry Lyndon, the fixed, well-composed, balanced frames reflect the static, hierarchical society of the time (Figure 5.2). Everyone has his place; every social interaction is governed by well-defined rules. The actors move within this frame without being able to alter it. It is a reflection of the world they live in, and while it strongly implies a sense of order and tranquility, it also carries an overpowering lack of mobility: both social and physical. The world is static: the characters try to find their place in it. Each scene is played out completely within this fixed frame:
without movement, cuts, or changes in perspective. This use of the frame
conveys a wealth of information independent of the script or the actions
of the characters. It adds layers of meaning.
Figure 5.3. Songs From the Second Floor uses only static frames. Every scene in the movie is done in one shot and the camera does not move.
A similar use of the static frame is the Swedish film Songs from the Second Floor (Figure 5.3), which also plays out every scene (with one exception) as a single long take within a completely immobile frame. Jim Jarmusch used the same technique in his second film, Stranger Than Paradise.
In both examples, the distancing nature of the frame is used for its own purpose. The filmmakers are deliberately putting the audience in the position of the impersonal observer. This can lend an observational, judgmental tone, or, much like objects in the foreground of the frame, make viewers work harder to put themselves into the scene, or a combination of both. As with almost all cinematic techniques, they can be used in reverse to achieve a completely different effect than normal.
CAMERA ANGLES
The great cinematographer Michael Chapman (Taxi Driver, Raging Bull)
said this: “Angles tell us emotional things in ways that are mysterious—
emotional things that I am often unaware of. I think a particular angle is
going to do one thing, and it does something quite different often. Occasionally you will hit on an angle that is absolutely inevitable—it's just the right angle. It's puzzling and mysterious."
THE SHOTS: BUILDING BLOCKS OF A SCENE
It is useful to think of “building” a scene. Since we make scenes one shot
at a time, we can consider that we are assembling the elements that will
make the scene. If we think of a language of cinema, these shots are the vocabulary; how we edit them together would be the syntax of this language. These are the visual aspects of the language of film; there are, of course, other properties of this language that relate more to plot structure and narrative, but here we are concerned only with the visual side of this subject. In terminology, there are two general types of shots: framing shots (defined by how much is included) and function shots (defined by what purpose they serve in editing). In a by no means exhaustive list, they are:
Figure 5.4. (top) An over-the-shoulder
shot from Casablanca
Figure 5.5. (bottom) A medium shot
from The Big Lebowski
FRAMING SHOTS
• Wide shot (or long shot)
• Full shot
• Cowboy
• Two shot
• Medium
• Close-ups
• Clean single
• Dirty single
• ECU
• Over-the-shoulder
FUNCTION SHOTS
• Establishing shots
• Cutaway
• Insert
• Connecting Shot
• Transitional Shot
Figure 5.7. This establishing shot from Manhattan shows us the location and the time, but also creates the mood and tone for the scene.
Let's look at them individually. As with many film terms, the definitions are somewhat loose and different people have slight variations in how they apply them, particularly as you travel from city to city or work in another country; they are just general guidelines. It is only when you are lining it up through the lens that the exact frame can be decided on.
As they appear in the script, stage directions are non-binding—it is
entirely up to the director to decide what shots will be used to put the
scene together. The screenwriter really has no say over what shots will be
used, but they are helpful in visualizing the story as you read the script—
especially if you are giving the script to people in order to secure financing
for the project or to actors so they can decide if they want to be involved.
These shots are the basic vocabulary we deal with—both in terms of edit-
ing and also in terms of the director communicating to the DP what it is
they are trying to do. These basic elements and how they are combined in
editorial continuity are the grammar of cinema.
WIDE SHOT
The wide shot is any frame that encompasses the entire scene (Figure 5.1).
This makes it all relative to the subject. For example, if the script desig-
nates “Wide shot—the English Countryside” we are clearly talking about
a big panoramic scene done with a short focal length lens taking in all the
eye can see. On the other hand, if the description is “Wide shot—Leo’s
room” this is clearly a much smaller shot but it still encompasses all or
most of the room.
ESTABLISHING SHOTS
The establishing shot is normally a wide shot (Figure 5.7). It is the opening
shot of a scene that tells us where we are. The scene heading in the script
might be "Ext. Helen's office – Day." This might consist of a wide shot of an office building, so when we cut to a shot of Helen at her desk, we know where we are: in her office building. We've seen that it is a big, modern building, very upscale and expensive, and that it is located in midtown Manhattan, and the bustling activity of the streets indicates it's another hectic
workday in New York. The establishing shot has given us a good deal of
information.
CHARACTER SHOTS
There are a number of terms for diferent shots of a character. Most movies
are about people, so shots of people are one of the fundamental building
blocks of cinema.
Figure 5.8. A full shot (head-to-toe)
from American Gangster.
FULL SHOT
Full shot indicates that we see the character from head-to-toe (Figure 5.8).
It can refer to objects as well: a full shot of a car includes all of the car. A
variation is the cowboy (Figure 5.12), which is from the top of the head to
mid-thigh, originally in order to see the six-guns on his belt.
TWO SHOT
The two shot is any frame that includes two characters. The interaction
between two characters in a scene is one of the most fundamental pieces
of storytelling; thus, the two shot is one you will use frequently. The
two characters don’t have to be arranged symmetrically in the frame. They
might be facing each other, both facing forward, both facing away from
the camera, etc., but the methods you use for dealing with this type of
scene will be the same.
MEDIUM SHOT
The medium shot, like the wide shot, is relative to the subject. Obviously,
it is closer than a full shot. Medium shots might be people at a table in a
restaurant, or someone buying a soda, shown from the waist up. By being
closer in to the action, we can see people’s expressions, details of how they
are dressed, and so on. We thus become more involved in what they are
saying and doing (Figure 5.5).
CLOSE-UPS
Close-ups are one of the most important shots in the vocabulary. There are a number of variations: a medium close-up (Figure 5.9) would typically be considered as something like from top of head to waist or something in that area. A close-up (CU) would generally be from the top of the head to somewhere just below the shirt pockets. If the shot is cut just above the shirt pocket area, it is often called a head and shoulders. A choker would be from the top of the head down to just below the chin (Figure 5.13). A tight close-up would be slightly less: losing some of the forehead and perhaps some of the chin, framing the eyes, nose, and mouth. An extreme close-up or ECU might include the eyes and mouth only; this is sometimes called a Sergio Leone after the Italian director who used it frequently. Just as often, an ECU is an object: perhaps just a ring lying on a desktop, a watch, and so on. Any shot that includes only one character is called a single.
The best professional advice I ever received was from Gordon Willis. He stressed the importance of always having a point of view when approaching a scene. It's the first question I ask myself when I'm designing my coverage: what is the point of view, or whose? Once I've answered this question, everything falls into place with much more ease.
Ernest Dickerson (Malcolm X, Do the Right Thing)
Terminology for close-ups includes:
• Medium CU: Mid-chest up.
• Choker: From the throat up.
• Big Head CU or tight CU: Just under the chin and giving a bit of "haircut"—cutting off just a little bit of the head.
• ECU: Varies, but usually just mouth and eyes or eyes only.
Figure 5.9. (top) A medium close-up
from No Country for Old Men
Figure 5.10. (second from top) A Sergio
Leone shot—Extreme Close-up (ECU)
Figure 5.11. (third from top) A dirty single from Zodiac. Not quite an over-the-shoulder, as it has very little of the foreground person.
Figure 5.12. A cowboy shot—from top of head to mid-thigh in order to show the guns. In Europe it is called the American shot.
A close-up, medium, or full shot might also be called a clean single whenever
it’s a shot of one actor alone, without anything else in the foreground. If
we do include a little bit of the actor in front or even some other pieces of
objects in the foreground, it’s often called a dirty single (Figure 5.11). This is
not to be confused with an over-the-shoulder (below), which includes more
of the foreground actor.
OVER-THE-SHOULDER
A variation of the close-up is the over-the-shoulder or OTS (Figure 5.4),
looking over the shoulder of one actor to a medium or CU of the other
actor. It ties the two characters together and helps put us in the position of
the person being addressed. Even when we are in close shot of the person
talking, the OTS keeps the other actor in the scene. An OTS contains
more of the foreground actor than a dirty single and their position in the
frame is more deliberate. A variation is a dirty single (Figure 5.11), which
just shows a little bit of the person in the foreground, not part of the head
and the shoulder as you would see in a normal OTS.
CUTAWAYS
Figure 5.13. (top) A tight close-up from
A cutaway is any shot of some thing or person in the scene other than the Apocalypse Now; sometimes called a Big
main characters we are covering but that is still related to the scene (Figure Head close-up or a choker In all tighter
5.14). The defnition of a cutaway is that it is something we did not see close-ups, it’s OK to give them a “hair
cut ”
previously in the scene, particularly in the master or any wide shots.
Examples would be a cutaway to a view out the window or to the cat Figure 5.14. (above) A cutaway from
91/2 Weeks A shot of the cat is not really
sleeping on the foor. Cutaways may emphasize some action in the scene, part of the scene, but it adds mood and
provide additional information, or be something that the character looks tone
at or points to. If it is a shot of an entirely diferent location or something
unrelated to the scene, then it is not a cutaway, but is a diferent scene.
An important use of cutaways is as safeties for the editor. If the editor is
somehow having trouble cutting the scene, a cutaway to something else
can be used to solve the problem. A good rule of thumb is in almost every
scene you shoot, get some cutaways, even if they are not called for in the
script or essential to the scene—a cutaway might save the scene in editing.
REACTION SHOTS
A specific type of close-up or medium is the reaction shot. Something happens or a character says something and we cut to another person reacting to what happened or what was said; it can be the other person in the dialog or someone elsewhere in the scene. A reaction shot is a good way to get a safety cutaway for the editor. Sometimes the term just refers to the other side of the dialog, which is part of our normal coverage. Reaction shots are very important, and many beginning filmmakers fail to shoot enough of them. Silent films were the apex of reaction shots, as their makers understood that it is when you see the facial and body language reactions of the listener that you get the entire emotional content of the scene. Reaction
shots may not seem important when you are shooting the scene, but they
are invaluable in editing.
Figure 5.14. The scene starts with a
medium on Henry Hill (Goodfellas)
Figure 5.15. A cutaway to the sausages
helps the editor maintain the pacing of
the scene
Figure 5.16. From the cutaway, the
editor cuts back to Paul Cicero, the
other character in the scene
INSERTS
An insert is an isolated, self-contained piece of a larger scene. Inserts are
sometimes called cut-in shots. To be an insert instead of a cutaway, it has
to be something we saw in the wider shots. Example: she is reading a book.
We could just shoot the book over her shoulder, but it is usually hard to
read from that distance. A closer shot will make it easy to read. Unlike
cutaways, many inserts will not be of any help to the editor. The reason for
this is that since an insert is a closer shot of the larger scene, its continuity
must match the overall action. For example, if we see a wide shot of the
cowboy going for his gun, a tight insert of the gun coming out the holster
must match the action and timing of the wider shot; this means it can be
used only in one place in the scene and won’t help the editor if they need
to solve a problem elsewhere in the scene. Inserts tend to fit into a few
general categories:
Figure 5.17. (above) An insert of the
clock conveys important story infor-
mation and emphasis in this shot from
High Noon
Figure 5.18. (left, top) This scene from
Barton Fink is an excellent use of inserts
to give us not only basic information
but a feel for how run down and creepy
the hotel is This two shot establishes the
scene
Figure 5.19. (left, middle) An insert reveals his dirty fingernails and the rusted bell.
Figure 5.20. (left, bottom) An insert
of the book adds to the texture of the
scene
CONNECTING SHOTS
Most scenes involving two people can be adequately edited with singles of each person, whether they are talking to each other or one is viewing the other from a distance, such as a shot of a sniper taking aim at someone. This is sometimes called separation. There is always a danger, however,
that it will seem a bit cheap and easy and the fact that it is an editing trick
might somehow undermine the scene. Any time the scene includes people
or objects that cannot be framed in the same shot at some point in the
scene, a connecting shot may be a good idea although it is by no means
absolutely necessary. This applies especially to point-of-view shots where
the character looks at something, then in a separate shot, we see what she
is looking at; but it also applies to any scene where two or more people are
in the same general space, whether they are aware of each other or not. A
connecting shot is one that shows both of the characters in one shot (Fig-
ures 5.22 through 5.25).
REVEALS
A reveal is an important device. It is a technique, not a specifc type of shot.
It is when some person or object is hidden from the audience and then
shown to them. An especially beautiful reveal is shown in Figure 5.21.
PICKUPS
A pickup can be any type of shot, master or coverage, where you are starting in the middle of the scene (different from previous takes, where you started at the beginning as it is written in the script). You can pick it up only if you are sure you have coverage to cut to along the way. A "PU" is added to the scene number on the slate so the editor will know why they don't have a complete take of the shot.
Another use of the term is a pickup day. This is one or several days of shooting after the film is already in editing. At this point, the director and editor may realize that there are just a few shots here and there that they absolutely must have in order to make a good edit.
TRANSITIONAL SHOTS
Some shots are not parts of a scene themselves but instead serve to connect two scenes together. We can think of these as transitional shots. Some are simple cutaways: a scene ends, cut to a shot of a sunset, and then into the next scene. There are many other types of transitional shots as well; they are a sort of visual code to viewers that the scene is ending. Scenes of the city, a sunset, or a landscape are frequently used as transitional devices.
Figure 5.22. (top) This scene from Skyfall consists of Bond and a villain on top of a train and Moneypenny on a nearby hilltop—the first three frames above are sometimes called separation: individual parts of a scene, such as a close-up or medium shot, showing only one side of the action.
Figure 5.23. (second down) A choker (tight CU) of Moneypenny aiming.
Figure 5.24. (third down) Combining the close-up on Moneypenny with her optical POV of the men on the train would make the scene play fine—no need to have her and the actors on the train in the same location or even the same country while shooting; the audience will accept it.
Figure 5.25. (bottom) Shooting a connecting shot really ties the scene together and makes it more real and more effective than using separation shots only.
INVISIBLE TECHNIQUE
In almost all cases, we want our methods to be transparent—we don't want the audience to be aware of them. We are striving for invisible technique. Camera movement, lighting, set design, even actors' behavior that makes viewers aware that they are watching a movie all distract from their following the story, just as seeing a Starbucks cup on the table in an 18th-century drama would. There are a few obvious exceptions, such as when a character talks directly to the camera, but these are deliberate and are called "breaking the fourth wall," which we'll talk about in a bit.
INVOLVING THE AUDIENCE: POV
Recall the three forms of literary voice: first person, second person, and third person. In first-person storytelling (whether in a short story, a novel, or a film), a character in the story is describing the events. She can only describe things that she sees. First person speaks as "I," as in "I went to the zoo." Second person speaks as "you," as in "You went to the zoo." It is someone who is not the speaker but who is part of the conversation. Third person, on the other hand, speaks about "they," as in "They went to the zoo." Third person is completely objective, and first person is completely subjective.
Learn the rules before you try to bend or break them. You need a foundation on which to build.
Douglas Slocombe (The Italian Job, Raiders of the Lost Ark, The Servant)
Figure 5.26. (top) Executing a subjective POV requires some attention to detail. When James Stewart looks out the window, he establishes that he is looking at something, and the direction of his eyeline shows us where he is looking. If he were looking up at the sky, the following shot would make no sense.
Figure 5.27. (below) The payoff of the POV is to show what he is looking at. If the cars were shot from ground level, the POV would make no sense.
[Diagram: typical coverage of a two-person scene—over-the-shoulder, clean single, and close-up, on her and on him]
• If characters leave, make sure they exit entirely, leaving a clean
frame. Continue to shoot for a beat after that.
• Consider using transitional shots to get into or out of the scene.
• Shoot all the shots on one side before moving to the other side of
the scene. This is called shooting out that side.
If you know you are going to use mostly the coverage when you edit, you
may be able to live with some minor mistakes in a master.
TURNAROUND
Obviously, you would never do the over-the-shoulder on one actor, then
move the camera to do the OTS on the other actor, then go back to the
first for the close-up, and so on. This would be very inefficient and time-
consuming. So naturally you do all the coverage on one side, then move
the camera and reset the lighting. The term for this is that you shoot out
one side before you move to the other side, which is called the turnaround.
These days the DP will often light the whole set so the director can move
freely, use multiple cameras, and so on, but the concept of shooting out a
side is still good general practice. Many directors and DPs call it the reverse,
as in "We're done with her shots, now let's shoot the reverse on him." Techni-
cally, this is wrong, but as long as everyone understands what is meant, it
doesn't matter. Many people refer to a true reverse to distinguish it from a
turnaround. If the director calls it a reverse, just go with it.
ANSWERING SHOTS
The group of mediums, over-the-shoulders, and close-ups (the coverage) on
the second actor that match the ones done on the first actor are called the
answering shots. Every shot you do in the first setup should have an answering
shot. In cases where some obstacle precludes a good camera position for
the turnaround, or perhaps the sun is at a bad angle, it is possible to cheat.
CHEATING THE TURNAROUND
In any of these cases, it is possible to move the camera and lights only a
little and just move the actors. This is a last-ditch measure and is only used
in cases where the background for one part of the coverage is not usable or
there is an emergency in terms of the schedule—if, for example, the sun is
going down. The idea is that once we’re in tight shots, we really don’t see
much of the background.
It is not correct, however, to just have them switch places. In cheating
a turnaround, you have to either move the camera a couple of feet, or
even better, just slide the foreground actor over so you are over the cor-
rect shoulder. (Fortunately, moving the foreground actor usually involves
moving the key only a little to make sure it is on the correct side of the
face.) The key to a successful cheat is that the background either be neutral
or similar for both actors as established in any previous wide shots. Also,
be sure the actors' eyelines are correct.
IN-ONE AND DEVELOPING MASTER
Of all the methods of shooting a scene, by far the simplest is the in-one,
sometimes called a oner or a Developing Master, or by the French terms plan-
scène or plan-séquence (Figures 5.33 through 5.42). This just means the
entire scene in one continuous shot. A scene might be as simple as "she picks
up the phone and talks," in which case a single shot is probably plenty.
Some in-ones can be vastly more complicated, such as the famous four-
minute opening shot of Touch of Evil (Figure 1.15).
A caution, however: when these shots work, they can be magnificent,
but if they don't work—for example, if you find in editing that the scene
drags on much too slowly—your choices are limited. If all you did was
several takes of the long in-one, you really don't have much choice in edit-
ing. Play it safe—shoot some coverage and cutaways just in case. Another
useful tactic: when the director has a take they like, do another one at a
10% quicker pace.
THE DEVELOPING MASTER
[Diagram: camera and actor movement (left) and the resulting shots (right).]
Figure 5.33. (left) The opening shot in a typical Developing Master. It makes
use of deep focus to include close foreground and deep background. It is a
variation of an in-one, an entire scene done in one continuous shot.
Figure 5.34. (right) The shot as it appears on camera.
Figure 5.36. (right) The resulting frame.
Figure 5.43. The first step in shooting a scene with the Freeform Method is
to shoot the Dialog Pass—the camera operator does their best to pan back
and forth to the actor who is speaking.
Figure 5.44. Next take, the operator pans to the person who is not speaking.
This is the Reaction Pass. It not only gets you reaction shots, which are
essential to any scene, it also gives the editor some ways to cut the scene
for pacing, emphasis, or to cover up mistakes.
Figure 5.45. Finally comes the Freeform Pass. The camera operator just goes
for it, moving freely to get wide shots, over-the-shoulders, anything. Since
they've been through the scene several times, the operator will have a good
feel for where to go.
FREEFORM METHOD
Many scenes these days (and even entire movies) are shot in what is com-
monly called documentary style—the camera is handheld, loose, and the
actors' movements don't seem pre-planned (Figures 5.43 to 5.45). It looks
like documentary style and many people call it that, but it is not really.
When shooting a real documentary, we can almost never do second takes,
or have them repeat an action. Our aim in shooting fiction scenes like this
is to make it seem like a documentary. In most cases, scenes like this are
shot several times with the actors repeating the scene for several takes. The
camera operator pans back and forth to always be on the person who is
speaking. This can be a disaster for the editor. If you shoot three times like
this, you end up with three takes that are almost the same and the camera
is only on the actor who is talking. Editing is all about having different
angles to cut to. If all you have is three almost identical takes, there are no
different angles to cut to. Also, you have no reaction shots; reaction shots
are important to any dialog scene.
SHOOTING THE FREEFORM METHOD
Freeform works well for providing the editor with lots of cuttable mate-
rial and is also very efficient in shooting:
• On the first take, follow the dialog. Do your best to stay with the
actor who is speaking. This is the dialog pass.
• On the next take, pan back and forth to stay with the person
who is not talking. This will give you lots of good reaction shots,
which are important. It will also give the editor lots of things to
cut away to. This is the reaction pass.
• For the third take (if you do one) improvise: follow the dialog
sometimes, go to the nonspeaking actor sometimes, occasionally
back up to get a wide shot—whatever seems appropriate. This is
the freeform pass.
All these together will give you a scene you can cut together smoothly and
give the editor lots of flexibility to cut the scene in various ways and to
tighten up parts that seem to be dragging.
OVERLAPPING OR TRIPLE-TAKE METHOD
The overlapping method is sometimes called the triple-take method. It is not
used as much as other ways of shooting a scene, but it is important and it
is especially useful in understanding the concepts of continuity. Having
action overlap is essential to many ways of shooting a scene (Figure 5.46).
An example: a lecturer walks into a room, sets his notes on the lectern,
pulls up a chair and sits down. This is where the overlapping part comes in.
You could get a wide shot of him coming in, then ask him to freeze while
you set up for a closer shot of him putting the notes on the lectern, then
have him freeze again while you set up a shot of him pulling up the chair.
What you will discover is that the shots probably won’t cut together
smoothly. The chance of finding a good, clean cutting point is a long shot.
It is the overlapping that helps you find smooth cut points. Here is what
will work much better: you get a wide shot of him walking in and let him
take the action all the way through to putting the notes on the lectern.
Then set up a different angle and ask the actor to back up a few steps. Once
you roll the camera, the actor comes up to the lectern again (repeating the
last part of his walk). You then shoot the action all the way through to
pulling up the chair.
Again you set up a different angle, and have the actor back up from the
lectern, and repeat the action of putting down the notes and then carrying
it on through to the end of the scene. All this overlapping will enable
you to cut the action together smoothly with continuity cuts. The most
important principle to take from this is to always overlap all action, no matter
what shooting method you are using. Giving the editor some extra overlap
at the beginning or end of any shot will prevent many potential problems
when editing the scene. Again, it is important to remember that this is one
of our primary responsibilities—making sure all the footage is cuttable.
WALK-AND-TALK
Another variation is the walk-and-talk (Figure 5.47). It's pretty easy—the
camera just follows along as two or more people walk and talk. The camera
can be leading them, alongside them, or following them. It's wise to shoot
several angles. If you only shoot one angle (such as leading them), then you
are stuck with using that take from start to finish, no matter what problems
may show up in the editing room—you have nothing to cut to.
Figure 5.47. (top) Shooting the Walk-and-Talk. You can film from any angle:
leading them, following, alongside. It is usually best to shoot several angles
to give the editor shots to cut to.
Figure 5.48. (above) Frames from the opening sequence of Midnight in Paris,
shot in soft warm tones by Darius Khondji. It is a montage because it is a
series of shots connected only by a common theme.
MONTAGE
There is a special form of editing used in dramatic narrative filmmaking
that does not aim for continuity at all; this is called montage. A montage is
simply a series of shots related by theme. In this montage from Midnight
in Paris (Figure 5.48) the theme is Paris in the rain—it's not just a beautiful,
lyrical sequence, it actually relates to a theme of the movie.
ESTABLISHING THE GEOGRAPHY
Figure 5.49. This establishing shot from Saving Private Ryan shows us at a
glance where the characters are and what the mood of the scene is.
A phrase that is often used is that we have to "establish the geography." In
other words, we have to give the audience some idea of where they are,
what kind of place it is, where objects and people are in relation to each
other. Other aspects of this were discussed earlier in this chapter.
Establishing the geography is helpful to viewers to let them know the
“lay of the land” within a scene. It helps them orient themselves and pre-
vents confusion that might divert their attention from the story. There are
times when you want to keep the layout a mystery, of course. As we will
see throughout the discussion of film grammar and editing, one of the
primary purposes is to not confuse the audience. Figures 5.7 and 5.49 are
examples of establishing shots that have value in ways beyond just showing
the geography—they can also establish tone, mood, and time of day.
An establishing shot, such as our office building example, might also
include a tilt up to an upper floor. This indicates to viewers that we are
not just seeing an office building, but that we are going inside. Shots of
this type are sometimes considered old-fashioned and prosaic, but they
can still be effective. Even though they give us a good deal of information,
they are still a complete stop in the dramatic action.
Many filmmakers consider it more effective if the establishing shot can
be combined with a piece of the story. One example: say we are looking
down that same bustling street and our character Helen comes into view,
rushing frantically and holding a big stack of documents; we pan or dolly
with her as she runs into the lobby and dashes to catch a departing elevator.
The same information has been conveyed, but we have told a piece of the
story as well. Something is up with Helen; all those documents are obvi-
ously something important that has put her under a great deal of stress.
Of course, in the story, Helen may already be in her office. One of the
classic solutions has been to combine a bit of foreground action with the
establishing shot. For example, we start with a medium shot of a sidewalk
newsstand. An anonymous man buys a paper and we can read the headline
“Scandal Disclosed,” and we then tilt to the building. What we have done
here is keep the audience in the story and combined it with showing the
building and the context. Certainly it’s a lot better than just cutting to
shooting.methods..87.
Helen and have her do some hackneyed “on the nose” dialog such as, “Oh
my god, what am I going to do about the big financial scandal?" Of course,
there is one more level you can add: the guy who buys the newspaper is
not an anonymous man but turns out to be the reporter who is going to
uncover the real story. These are just examples, of course, but the point
is to convey the location information in combination with a piece of the
story or something that conveys a visual idea, a sound track inflection, or
anything that increases our understanding of the place, the mood, or any-
thing that is useful to you as the storyteller.
A more elaborate establishing sequence is this one from Goldfinger (Figure
5.50). The opening shot is a flying banner that tells the audience they are
in Miami Beach, and the helicopter shot closes in on a beach hotel and
then into a tighter shot of a diver. We follow him down into the water
and then cut to under the water where he swims away. A crossing female
swimmer carries us back in the opposite direction where we discover Felix
Leiter, who walks away to find... Bond, James Bond.
INTRODUCTIONS
Figure 5.50. (above) A simple but effective sequence that introduces the
geography and sets up the story in Goldfinger.
Figure 5.51. (right, top) Opening frame of a duel scene in Barry Lyndon.
Figure 5.52. (right, bottom) A long, slow zoom out results in slow disclosure
of the entire scene.
When you are bringing the viewer into a scene, you can think of it as
bringing a stranger into a party. There are four basic introductions that
need to be made: place, time, geography, and the main characters. Many
aspects of introductions and transitions are in the script, but they must be
actualized by the director and the cinematographer on the set. Some are
improvised at the time of shooting. For example, as you're about to shoot,
you see there is a full moon just above the building—a great way to start
the scene by tilting down. It's perfectly OK to adjust your shot list based
on new opportunities.
Figure 5.53. (top) A dramatic and suspenseful introduction of the antagonist
in High Noon. His arrival has been talked about and dreaded for the entire
movie up until this point, but when he arrives on the noon train, the director
delays showing his face until the most dramatic story moment. As he gets
off the train, we see him only from the back.
Figure 5.54. (middle) As his former girlfriend is boarding the train to escape
the coming violence, she turns to see him and their eyes meet.
Figure 5.55. (bottom) Our first view of him is both a dramatic reveal and her
subjective POV: it makes for a powerful moment. Notice how their eyelines
match; if they did not, it would not seem like they were looking at each
other. In order to match, they need to be opposite: she is looking toward
the left of the frame and he is looking toward the right side of frame.
back to reveal where we are and what is going on. This is a variation of
the basic reveal where the camera starts on something that either moves
or the camera moves past it to show some other scene element. Stanley
Kubrick uses slow disclosure masterfully in Barry Lyndon (Figures 5.51 and
5.52). Throughout the film, one of the key formal devices is the very long,
very slow zoom back. He starts with a telling detail of the scene and then
very deliberately pulls back to reveal more and more. As with so many
other aspects of the film (perfectly composed fixed frames based on paintings
of the period, the emphasis on formal geometry), this slow pull back
underlines the rigid formalism of society and culture at that time as well
as the inevitability of Lyndon's decline. These long pullbacks also serve as
editorial punctuation between sequences and contribute to the deliberate
pace of the film. For these shots, Angenieux created a special lens—the
Cine-Pro T/9 24-480mm with a far greater zoom range than any other
lens available at the time. Famously, he also had an ultra-fast Zeiss f/0.7
still photo lens converted for use on a motion picture camera to shoot
almost entirely with natural light and period sources such as candelabras.
THE TIME
Figure 5.56. (top) Devices to convey a short passage of time are often more
difficult than demonstrating a long passage of time between cuts. In Ronin,
the director uses the fact that the Christmas tree is being decorated in one
shot. The bellhops enter with a box of decorations for the tree.
Figure 5.57. (above) In the next shot, the tree is fully decorated. It is very
subtle, but the audience will subconsciously register that some short amount
of time has passed between the two shots.
Beyond establishing where we are, the viewer must know when we are.
Internally within the scene, this is either a function of a transition shot
or other types of temporal clues. In these two frames from Ronin (Figures
5.56 and 5.57) the director needed to find a way to establish that fifteen or
twenty minutes had passed. In the first shot we see the bellhops starting
to put decorations on a tree in the hotel lobby. In the second shot, as the
camera pans to follow the character's exit, we see that the decorations have
been completed. For the audience, this is completely subconscious, but it
conveys the passage of time in a subliminal way.
THE CHARACTERS
Introducing the characters is of course mostly a function of the script and
the actors, but a general principle is to introduce key characters in some
way that visually underlines some aspect of their importance, their nature,
and their story function. Also, making this introduction visually interesting
helps the audience remember the character: a form of visual punctuation.
We already saw a particularly elegant character introduction in Figure 5.21
from Titanic.
For the entire first half of High Noon, we have been waiting for the arrival
of the bad guy on the noon train. He has been discussed, feared, even run
away from. When we finally meet him (Figures 5.53 through 5.55), as the
villain gets off the train, we do not see his face. Then for an entire sequence
of shots, we see him being greeted, strapping on his guns, and still we do
not see his face. Finally, his former lover is getting onto the train; she is
leaving town because he has come back. She turns to look at him, and it is
only then that we first see his face. It is a dramatic and distinctive way to
introduce the character.
OTHER EDITORIAL ISSUES IN SHOOTING
In the course of shooting the scene, it is important to think about the small
shots that will help the editor put the scene together in a way that is seam-
less, logical, and also suits the tone, pacing, and mood of the sequence.
JUMP CUTS
Disruptions of continuity can result in a jump cut: there seems to be a gap
between two shots; they don't flow continuously. Jump cuts can also be
used deliberately as an editorial technique. Jump cuts as a stylistic device
stem from Jean-Luc Godard's film Breathless (Figure 5.58). Film lore is that
he shot the scenes in such a way that they could not be edited convention-
ally (such as running the camera for long periods as the characters drive
around Paris) and had to come up with a way of covering up his error.
TYPES OF EDITS
Some aspects of editing are beyond the scope of what we deal with in day-
to-day production on the set, but directors and cinematographers must
know most of the major concepts of editorial cutting in order to avoid
mistakes and to provide the editor with the material she needs to not only
cut the film together seamlessly, but also to control pacing and flow, and
give the scenes and the whole piece unity and cohesion. Remember that
the first job of the director and the DP is to deliver cuttable footage to the
editor. There are five basic types of cuts:
• The action cut
• The POV cut
• The match cut
• The conceptual cut
• The zero cut
Figure 5.58. Jump cuts in Breathless, in this case created by editing together
different clips from a long continuous take. Normally, a jump cut is a mistake,
but Godard managed to make it into a stylistic inflection.
Figure 5.59. (left, top) A concept cut/match cut in Lawrence of Arabia. Previ-
ously, we have seen Lawrence put a match out with his fingers to show his
indifference to pain. Here, he merely blows the match out and we hard cut
to the sun rising over the desert (Figure 5.59, left, bottom). It is a memorable
moment in this great film.
THE ACTION CUT
The action cut, sometimes called a continuity cut or a movement cut, is
employed whenever action is started in one shot and finished in the next.
For example, in the first shot we see him opening the door, and in the next
shot we see him emerging from the other side of the door. Or she reaches
across the table, then cut to her hand picking up the cup of coffee. Shooting
for the action cut must be done properly if it is to work in editing.
First of all, you should always overlap the action. In this example, it would
be a mistake to have her reach for the cup, then call “cut” as soon as her
arm extends, then proceed to a close-up of her arm coming in to pick up
the cup. There is a grave danger that there will be a piece missing, which
will result in a jump cut. While shooting for an action cut, it is important
to be aware of the possibilities of the cut point. It is always best to cut “on
the action.” If someone is seated at a desk and rises to go to the door, you
want to shoot while he is rising from the chair. If the phone rings, you
want to cut to the next shot while he is reaching for the phone, and so on.
Cutting on action makes for a smoother cut and a more invisible one. The
audience is a bit distracted by the action and less likely to notice the cut.
Figure 5.60. (top) In this scene from Hitchcock's North by Northwest, Cary
Grant picks up a shipping tube and reads the address.
Figure 5.61. (bottom) We cut to the reading insert that lets the audience
read what he is seeing. Remember to hold on the shot long enough to let
the audience read it. The rule of thumb is to maintain the shot for the time
it takes you to read it twice.
Take the example of answering the phone: in the medium shot, the phone
rings, he picks it up and starts talking. Then we cut to a close-up of him
talking. In this case, it is critical that his head be in the same position and
that he is holding the phone in the same way. If any of these fail, it will be
bad continuity and will be distracting. Cutting as he reaches for the phone
helps avoid these problems.
THE POV CUT
The POV cut is sometimes called "the look," which we briefly discussed
before (Figures 5.60 and 5.61). It is one of the most fundamental building
blocks of continuity and is especially valuable in cheating shots and
establishing physical relationships. A POV cut occurs anytime a look off-
screen in the first shot motivates a view of something in the next shot. The
simplest case:
• Shot 1: A character looks off-screen and up.
• Shot 2: A view of a clock tower.
There will be no question in the mind of the viewer that we are seeing the
tower as he would see it—that is, from his point-of-view. In order to do
this, the POV shot must satisfy certain conditions.
• Direction of look. If it is to cut to a shot of the clock tower, he has
to look up at approximately the right angle. If his gaze only rises
about 10°, for example, and the clock tower is ten stories high,
it won’t work. Similarly, if we have seen in a wide shot that the
clock tower is on his right side, then his look must go to the right
as well.
• Angle of the POV. The shot of the clock tower must bear some
logical relationship to where the viewer is. If we have seen that
he is standing on the plaza right under the tower, then we cannot
cut in a shot of it as seen from a hundred yards away over the trees.
Figure 5.62. (top) In 2001: A Space Odyssey, Kubrick uses an edit that is both
a conceptual cut and a match cut. In the first frame the ape has discovered
the bone as a weapon and tool and throws it into the air.
Figure 5.63. (middle) In the next frame we see the bone traveling through
the air.
Figure 5.64. (bottom) Then there is a match cut to a spaceship. Kubrick has
not only communicated that the original tool (a bone) has led to a world
where space travel is possible, but he has also moved the story forward from
the past to the future in an elegant way; no need for a clumsy title card that
says "10,000 Years Later."
The POV cut is a classic means of cheating locations. For example, our
story is based on him seeing a murder in the building opposite him, but
unfortunately, the building opposite our location is unavailable for use.
In this case, the POV cut from him looking out the window to a POV of
his view through the window of the building across the street will sell the
concept that he can see the murder.
THE MATCH CUT
The match cut is often used as a transitional device between scenes. An
example from a western: the telegrapher sends the message that the gover-
nor has not granted a pardon; the hanging will go on as scheduled. From
the telegrapher, we go to a shot of the telegraph pole (with audio of the
clicking of the telegraph). Then from the shot of the telegraph pole we
cut to a shot of the gallows: a vertical pole the same size, shape, and in the
same position as the telegraph pole. One image exactly replaces the other
on screen. One of the most effective uses of a match cut is in 2001: A Space
Odyssey as shown in Figures 5.62, 5.63, and 5.64. It is not enough that the
objects be similar in shape: the screen size (as determined by focal length)
Figure 5.65. 1917 uses many invisible edits to accomplish a seamless illusion
of a film shot entirely in one take. Here, the main character approaches an
opening in a stone wall.
Figure 5.66. The camera crossing the wall provides a place to disguise an edit.
Figure 5.67. Emerging on the other side, the illusion is perfect.
and the position in the frame must also match. One method is to have a
video of the previously shot scene and play it back on the director’s moni-
tor. For scenes requiring greater precision, a monitor with an A/B switch
allows the image to be quickly switched back and forth from the freeze
frame of the video to the live picture feed from the camera.
THE INVISIBLE CUT
The invisible edit technique is used by Sam Mendes and Roger Deakins in
1917 (Figures 5.65, 5.66, and 5.67). Although the film seems as though
it is one continuous take, it actually contains many cuts which are very
cleverly hidden, such as the edit that takes place as the camera crosses the
stone wall. When asked how many cuts there were in the film, editor Lee
Smith replied, "A lot more than you think." The illusion is accomplished
in a manner that can only be called masterful.
06
cameras
Figure 6.1. Shooting a magic hour shot with an Arri Alexa.
THE DIGITAL SIGNAL PATH
Let’s take a look at how the image is acquired and recorded in modern digi-
tal cameras; just a quick overview and then we’ll get into the details later.
The lens projects the scene image onto the sensor which reacts to light
by producing voltage fluctuations proportional to the levels in the scene.
Variations in voltage are analog, so the first process that must happen is to
convert this signal to digital. This is done by the Analog-to-Digital Converter
(ADC or the A-to-D Converter).
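As a rough sketch of what the A-to-D converter does (a simplified illustration; the bit depth and scaling here are generic, not those of any particular camera), each sampled voltage is mapped to one of a fixed number of integer code values:

```python
def quantize(voltage, v_max=1.0, bits=10):
    """Map an analog voltage (0..v_max) to an integer code value.

    A 10-bit converter has 2**10 = 1024 possible code values (0-1023);
    the ADCs in modern cinema cameras commonly use more bits, but the
    principle is the same.
    """
    levels = 2 ** bits
    # Clamp to the representable range, then scale to the top code value
    v = min(max(voltage, 0.0), v_max)
    return int(v / v_max * (levels - 1))

print(quantize(0.0))   # 0    (black)
print(quantize(0.5))   # 511  (mid-scale)
print(quantize(1.0))   # 1023 (full scale—any higher voltage clips here)
```

The clamp at the top of the range is the digital equivalent of clipped highlights: once the sensor voltage exceeds full scale, every value maps to the same maximum code.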
DIGITAL SIGNAL PROCESSOR
The data from the ADC then goes to the Digital Signal Processor (DSP)
in the camera. Now that the video image is a stream of digital code values
(bits) rather than an analog electronic signal (thanks to the ADC), the DSP
applies various algorithms which both condition the signal for further use
and also apply any modifications that we may want. These might include
such things as color balance, gamma, color modification, gain, and so on,
which can be controlled by switches/buttons on the camera (commonly
called the knobs) or, more commonly, menu selections in the camera con-
trols, all of which are discussed in greater depth later.
The DSP does a couple of different jobs; some of these operations may
include the Color Correction Matrix transform, Gamma Correction, linear-to-log
conversion and knee adjustment—all of which we will explore in more
detail. Most cameras that shoot RAW record the image without any of
these adjustments such as color balance, changes in contrast, and so on;
these adjustments are recorded separately as metadata; there are some
exceptions which we’ll get into later. By the way, RAW is not an acronym;
it doesn’t stand for anything. Also, writing it as all caps is an industry-wide
convention. RAW is something we’ll be talking about a great deal as it has
become an important part of motion picture production.
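To give a flavor of the kind of math a DSP applies, here is a plain power-law gamma correction on a normalized linear value (a generic textbook curve for illustration only; actual cameras use standardized or proprietary transforms such as Rec. 709 or log encodings):

```python
def gamma_correct(linear, gamma=2.2):
    """Encode a normalized linear light value (0..1) with a power curve.

    Raising to 1/gamma redistributes code values toward the shadows,
    where human vision is most sensitive.
    """
    return linear ** (1.0 / gamma)

# 18% mid-gray in linear light lands much higher on the encoded
# scale, which is the point of the curve:
print(round(gamma_correct(0.18), 3))  # 0.459
```

In a RAW workflow this kind of transform is deferred: the linear sensor data is recorded as-is and the curve travels alongside it as metadata, to be applied later in post.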
HD RECORDING
Traditional High-def cameras employ a basic scheme as shown in Figure
6.2. The digital image flow starts at the image sensor as photons from the
scene are focused by the lens onto the photo receptors (photosites). There are
several types of sensors which we will look into later in this chapter, but
all sensors work on the same principle: they convert a light image (pho-
tons) formed by the lens (optics) into an electronic signal (electrons). Video
Figure 6.2. (top) The signal path of a traditional video camera includes the
"knobs," or in-camera controls of the image, which make adjustments that
are then baked in to the image as it is recorded.
Figure 6.3. (below) The recording path of a camera that records RAW—
adjustments to the image are not baked in, but are recorded as metadata
alongside the image.
[Diagrams: Lens → Sensor → A-to-D Converter → Digital Signal Processor →
Storage Media. In the traditional HD path, the camera controls (the "knobs")
are applied before recording and baked in; in the RAW path, they are
recorded as metadata alongside the image.]
Figure 6.4. An Arri Alexa set up for camera tests. In this case, the color
balance values are being adjusted in the menus. What is interesting about
most modern cameras such as this is how few buttons, switches, and knobs
there are. Even the menus are not particularly complicated when compared
to HD cameras of the past.
There is another change in the naming of video formats: "1080" and
"720" refer to the height of the format in lines. The problem is that the
height of a frame is highly variable. It was a similar problem with 35mm
film. While the width of 35mm film has remained unchanged from the
time of Thomas Edison until today, the height of the frame has changed
a great deal—the result of a constant hunger for "wider" formats (1.85:1,
2.35:1, etc.). None of these is actually wider film—they are all (with the
exception of anamorphic, which is an optical trick, and 70mm, which
didn't last long as a theatrical format) the same width—what really changes
is the height of the frame.
For this reason, it makes more sense to use the width of the frame as
nomenclature, which is made easier since video no longer really consists
of scan lines (a vertical measurement only) but of pixels, which can be
quantified as either height or width. Thus, when we talk about 2K video
(a widely used format in theatrical projection), it means a frame that is
approximately 2,000 pixels wide (actually 2,048) while 4K video is 4,096
pixels wide. This means it doesn’t get complicated (in terms of names)
when you change the aspect ratio to a taller or shorter frame. Fortunately,
the high-def era has also brought about a wider standardization of aspect
ratio: 16×9 is now universal in television sets, in broadcast, on Blu-Ray
discs, and so on.
It may be common to refer to video as 4K, but 4K is technically a digital
cinema release container format (4096×2160) whereas UHD is broadcast
format (3840×2160) with a different aspect ratio. It is not correct to call
broadcast video 4K, even though most people do. 8K video doesn’t really
have a name other than “8K UHD,” and UHD on its own is generally a
reference to 3840×2160.
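As a quick reference, the sizes discussed above can be tabulated; this is a Python sketch with descriptive (not official SMPTE) format names:

```python
# Common container sizes (width x height in pixels).
# "2K DCI" and "4K DCI" are digital cinema containers; UHD is broadcast.
FORMATS = {
    "720 (HD)":       (1280,  720),
    "1080 (Full HD)": (1920, 1080),
    "2K DCI":         (2048, 1080),
    "UHD":            (3840, 2160),
    "4K DCI":         (4096, 2160),
    "8K UHD":         (7680, 4320),
}

for name, (w, h) in FORMATS.items():
    # Width is the stable, useful number; aspect ratio follows from both.
    print(f"{name:15s} {w}x{h}  aspect {w / h:.3f}")
```

Note that the broadcast formats (720, 1080, UHD) all work out to exactly 16×9, while the DCI cinema containers are slightly wider.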
The presentation formats (theaters, televisions, disc players, etc.) need to
be somewhat standardized—because millions of people who buy TVs and
media players, or thousands of theater owners as well as postproduction
facilities, invest in the equipment and it needs to remain more or less stable
for some time. The origination formats (cameras and editing hardware)
can be more flexible, just as theatrical movies were nearly all projected in
35mm even though they were originated on 35mm film, 65mm, 16mm, or video.
Figure 6.5. The relative sizes, and, therefore, resolutions from standard def (SD) up to 6K Ultra High-def (UHD). (Diagram labels: 6K, 5K, 4K, 3K, 2K, 1080, 720, SD.)
Figure 6.6. Standard Def and High-def digital video were generally referred to by their vertical dimension. For example, 1920x1080 (HD) video is commonly called "1080." The problem with this is that video can be acquired and displayed with different vertical dimensions to achieve different aspect ratios. For this reason, resolutions higher than HD are referred to by the horizontal dimension, such as 4K and 8K. As with many things digital, 4K is not exactly 4000, either in acquisition or display. (Diagram: SD is 480 pixels vertically, HD is 1080 pixels vertically, and 4K UHD is measured horizontally.)
Figure 6.7. A Red Komodo on location. (Courtesy of RED Digital Cinema)
of the data that comes off the sensors, mostly unchanged. Metadata is
recorded at the same time and it can record artistic intentions about color,
tonal range, and so on, but metadata can be changed or just ignored later
down the line. Like a photographic negative, a RAW digital image may
have a wider dynamic range or color gamut than the eventual final image
format is capable of reproducing, as it preserves most of the information
of the captured image.
The purpose of RAW image formats is to save, with minimum loss of
information, data sent directly from the sensor, and the conditions of the
image capture—the metadata, which can include a wide variety of infor-
mation such as white balance, ISO, gamma, matrix, color saturation, and
so on. The metadata can also include archive information such as what lens
was used, what focal length, f/stop, time of day and (if a GPS unit is used
at the time of shooting) the geographic coordinates of the shooting, and
other slate data such as the name of the production, etc.
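A sketch of the idea in Python (the field names here are purely illustrative; no camera maker's actual metadata schema is implied):

```python
from dataclasses import dataclass, asdict

# Illustrative only: hypothetical field names, not a real camera's schema.
@dataclass
class RawClipMetadata:
    white_balance_kelvin: int
    iso: int
    lens: str
    focal_length_mm: float
    f_stop: float
    production: str
    timecode: str

md = RawClipMetadata(
    white_balance_kelvin=5600, iso=800, lens="50mm prime",
    focal_length_mm=50.0, f_stop=2.8,
    production="Example Production", timecode="01:02:03:04",
)
# Because these values are metadata, they can be changed later
# without touching the sensor data itself:
md.white_balance_kelvin = 3200
print(asdict(md))
```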
RAW image files can, in essence, fill the same role as film negatives in tra-
ditional film-based photography: that is, the negative is not directly usable
as an image, but has all of the information needed to create a final, view-
able image. The process of converting a RAW image file into a viewable
format is sometimes called developing the image, in that it is analogous
with the motion picture film process that converts exposed negative film
into a projectable print.
With a motion picture negative, you can always go back to the original.
If you shot the negative right to begin with, you can make substantial
changes to the print, now or in the future. If five years from now, you
decide you want to make the look of the print very different (within the
limits of what is possible in making a film print, which is far more limited
than what you can do with the digital image) you can do so. You
can do the same with RAW; it is archival and non-destructive, and you
can manipulate the image later. Such alterations at the time of processing,
color correction, or anywhere along the line result in fewer artifacts and
degradations of the image; this includes compensating for under- or over-
exposure. A drawback is that RAW files are usually much larger than most
other file types, which means that cameras often need to impose lossy or
lossless compression to avoid ridiculously large sizes of captured files.
"Digital is changing the way we make films and in some ways, it is changing the language of cinema itself." Dante Spinotti (LA Confidential, Public Enemies)
A widespread misconception is that RAW video is recorded completely
uncompressed—most cameras record it with log encoding, a type of com-
pression, but there are some exceptions. Log encoding is so important that
we devote an entire section to it later in this book. Both with film nega-
tive and shooting RAW, it is important to remember that while you have
a wide degree of control over the image, it is still not magic—avoid the
myth of believing that we can "fix it in post."
There are many types of RAW files—different camera companies use
variations on the idea. RAW files must be interpreted and processed before
they can be edited or viewed. The software used to do this depends on
which camera they were shot with. Also, RAW files shot with a Bayer-
filter camera must be demosaiced/deBayered (the mosaic pattern imposed
on the image by the Bayer filter must be interpreted), but this is a standard
part of the processing that converts the RAW images to more universal
JPEG, TIFF, DPX, DNxHD, ProRes, or other types of image files.
One problem with RAW is that every company has their own version
of it and there is no standardization. Adobe has been trying to suggest
that the industry should come together to agree on common standards—a
common file format to store their proprietary RAW information in a way
that wouldn't require special apps or plugins. The Red company calls their
version Redcode RAW (.r3d), Arri calls theirs ArriRAW, Adobe uses Cin-
emaDNG (digital negative); ArriRAW is similar to CinemaDNG, which
is also used by the Blackmagic cameras. DNG is based on the TIFF format,
which is a very high-quality image with little or no compression. Stan-
dardization is also hampered by the fact that companies want to keep their
proprietary information secret and don't publish the inner workings of
their RAW formats. Some Sony cameras also shoot 16-bit RAW, which is
part of what they call SRMaster recording. Although formats vary from
company to company, there are some commonalities based on an ISO
(International Standards Organization) standard, which includes:
• A header file with a file identifier, an indicator of byte-order, and
other data.
• Image metadata which is required for operating in a database
environment or content management system (CMS).
• An image thumbnail in JPEG form (optional).
• Timecode, Keycode, etc., as appropriate.
Keep in mind that not all professional video is shot RAW. For many types
of jobs, log-encoded video is still preferred.
CHROMA SUBSAMPLING
Most camera sensors operate with red, green, and blue (RGB) informa-
tion. An RGB signal has potentially the richest color depth and highest
resolution, but requires enormous bandwidth and processing power and
creates huge amounts of data. Engineers realized that there is also a great
deal of redundant information: every channel contains both luma data (the
black-and-white gray tone values of the pictures) and chrominance data: the
color values of the image. Color scientists long ago discovered that most
of the information we get from an image is actually in the black-and-white
values of the picture, which is why in most situations we get almost as
much from a black-and-white picture as we do from a color picture of the
same scene—it's just inherent in how the eye/brain works. Each channel in
an RGB video signal carries essentially the same gray tone values, so there
are three redundant black-and-white images.
Figure 6.8. The operation of various forms of subsampling (4:4:4, 4:2:2, 4:1:1, 4:2:0). The black blocks represent the luminance channel, which is why there is no green channel.
Another basic fact of human vision is that a great deal of our vision is
centered largely in the green region of the spectrum. This means that the
green channel in video is somewhat similar to the luminance information.
You can try this yourself in any image processing software: take an average
photo and turn off the red and blue channels. The green channel by itself is
usually a fairly decent black-and-white photo. Now try the same with the
other channels—they are often weak and grainy by themselves.
Chroma subsampling is the name for a form of data reduction that works
with color difference signals. In this technology, the luma signal (black-and-
white brightness or luminance) is sampled at a different rate than the chro-
minance signal (color). Chroma subsampling is denoted as Y'CbCr. Y' is
the luma component while Cb and Cr are the color difference signals. Y
represents luminance, which is actually a measure of lighting intensity, not
video brightness levels, while Y' is luma, which is the weighted sum of the
red, green, and blue components and is the proper term to use in this con-
text. "Weighted" as it refers to luma means that it is non-linear.
You will notice that there is no Cg or green channel. It is reconstructed
from the other channels. Green doesn't need to be sent as a separate signal
since it can be inferred from the luma and chroma components. The edit-
ing or color correction software, or display device, knows the distribution
of the luminance gray tones in the image from the Y' component. Crudely
put—it knows how much of the image is blue and red, so it figures the rest
must be green. It's quite a bit more complicated than this, but that's the
basic idea.
Figure 6.9. Camera assistants prepare two cameras for a day's shooting. Note that they are labeled cameras "A" and "B" and different colored tape is used—this helps keep accessories and cases sorted out. Camera "A" has a digital rangefinder mounted above the lens—it looks like a set of miniature binoculars. (Photo courtesy John Brawley)
For luma (grayscale values), the engineers chose a signal that is 72% G,
21% R, and 7% B, so it’s mostly comprised of green, but it’s a weighted
combination of all three colors that roughly corresponds to our own per-
ception of brightness. To simplify a bit, the color information is encoded
as B-Y and R-Y, meaning the blue channel minus luminance and the red
channel minus luminance. This is called color difference encoded video—the
Cb and Cr. This method of encoding is sometimes called component video;
it reduces the requirements for transmission and processing by a factor of
3:2.
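A minimal sketch of this encoding in Python, using the Rec. 709 luma weights that the 72/21/7 percentages approximate; the scaling factors real systems apply to Cb and Cr are omitted for clarity:

```python
# Rec. 709 luma coefficients: roughly 21% R, 72% G, 7% B.
KR, KG, KB = 0.2126, 0.7152, 0.0722

def rgb_to_ycbcr(r, g, b):
    """Convert R'G'B' values (0..1) to luma plus color difference signals."""
    y = KR * r + KG * g + KB * b    # luma: weighted sum of R, G, B
    cb = b - y                      # blue color difference (B - Y)
    cr = r - y                      # red color difference (R - Y)
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Recover R'G'B'; green is inferred, it is never transmitted."""
    r = y + cr
    b = y + cb
    g = (y - KR * r - KB * b) / KG  # reconstruct the "missing" green
    return r, g, b

y, cb, cr = rgb_to_ycbcr(0.5, 0.25, 0.75)
print(ycbcr_to_rgb(y, cb, cr))  # round-trips back to the original RGB
```

The round trip works because luma is a known weighted sum of the three channels: given Y', Cb, and Cr, the green value is fully determined, which is exactly why no Cg channel needs to be sent.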
Because the human visual system perceives differences and detail in color
much less than in gray scale values, lower-resolution color information
can be overlaid with higher-resolution luma (brightness) information, to
create an image that looks very similar to one in which both color and
luma information are sampled at full resolution. This means that with
chroma subsampling, there can be more samples of the luminance than for
the chrominance. In one widely used variation of this, there are twice as
many luma samples as there are chroma samples, and it is denoted 4:2:2,
where the frst digit is the luma channel (Y’) and the next two digits are the
chroma channels (Cb and Cr)—sampled at half the rate of the luminance.
Video that is 4:4:4 has the same chroma sampling for color channels as
for luminance. There are other variations—for example, Sony’s HDCam
cameras sample at 3:1:1. You may occasionally see a fourth digit, such as
4:4:4:4; in this case the fourth number is the alpha channel, which contains
transparency information. There are others as well, such as 4:2:0—see
Figure 6.8 for a visual representation of these varieties. For our purposes,
we can say that a 4:4:4 signal has more data. In any case, a 4:4:4 signal is
going to be better in color depth and possibly in resolution as well—with
the proviso that as always, it requires more processing power and storage.
Some widely used chroma subsampling schemes are listed here. There are,
of course, more variations than we can go into here.
4:4:4—All three components are sampled at 13.5 MHz, meaning there
is no compression of the chroma channels; however, the signal might still
be compressed in other ways.
4:2:2—Four samples of luminance associated with two samples of Cr,
and two samples of Cb. The luminance sampling rate is 13.5 MHz; color
component rates are 6.75 MHz.
4:1:1—The luminance sample rate here is still 13.5 MHz, but the chro-
minance sample rate has dropped to 3.375 MHz.
4:2:0—This is like 4:2:2, but doing what’s called vertically subsampled
chroma. The luminance sampling rate is 13.5 MHz, and each component
is still sampled at 6.75 MHz, only every other line is sampled for chromi-
nance information.
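To make the idea concrete, here is a Python sketch of 4:2:0-style subsampling applied to one chroma plane, averaging each 2×2 block into a single sample; real encoders use more sophisticated filtering:

```python
def subsample_420(chroma):
    """Average each 2x2 block of a chroma plane into one sample.
    This halves the chroma resolution both horizontally and
    vertically, as in 4:2:0; luma would be left at full resolution."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = (chroma[y][x] + chroma[y][x + 1] +
                     chroma[y + 1][x] + chroma[y + 1][x + 1])
            row.append(block / 4.0)
        out.append(row)
    return out

cb = [[10, 20, 30, 40],
      [10, 20, 30, 40]]
print(subsample_420(cb))  # [[15.0, 35.0]]: 8 samples reduced to 2
```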
PIXELS
A pixel is not a fixed size; it is a point value that is mapped onto physical
elements in a display. For example, the same images might have very small
pixels when displayed on a computer monitor, but the same pixels will
be quite large when projected on a theater screen. The reason we don’t
perceive these larger pixels as visible elements is viewing distance—in the
theater, the viewer is much farther away.
RESOLUTION
The resolution of a device (such as a digital monitor or camera sensor) is
sometimes defined as the number of distinct pixels in each dimension that
can be displayed or captured. In some cases, this is expressed in megapixels,
at least in the world of digital still cameras (DSLRs)—a megapixel being
one million pixels. The term megapixels is almost never used in discussing
video cameras.
However, pixel count is not the only factor in determining resolution;
contrast is also an important element. Image processing software devel-
oper Graeme Nattress says this in his paper Understanding Resolution: “Our
overall perception of detail depends not just on the finest image features,
but also on how the full spectrum of feature sizes is rendered. With any
optical system, each of these sizes is interrelated. Larger features, such as
the trunk of a tree, retain more of their original contrast. Smaller features,
such as the bark texture on a tree trunk, retain progressively less contrast.
Resolution just describes the smallest features, such as the wood grains,
which still retain discernible detail before all contrast has been lost.” He
adds, “Resolution is not sharpness! Although a high resolution image can
appear to be sharp, it is not necessarily so, and an image that appears sharp
is not necessarily high resolution. Our perception of resolution is intrinsi-
cally linked to image contrast. A low contrast image will always appear
softer than a high contrast version of the same image.”
PHOTOSITES
Digital cameras, whether designed for video, still photos, or both, use
sensor arrays of millions of tiny photosites in order to record the image.
Photosites are photon counters—they react to the amount of light hitting
them and output voltage proportionate to that. They sense only photons,
which have no color (they produce color by affecting the cones of our
eyes at different wavelengths), and by the same token photosites have no
“color.”
PIXELS AND PHOTOSITES ARE NOT THE SAME THING!
It is easy to think that photosites are the same thing as pixels but they are
not; the process is a bit more complicated than that. The outputs from
photosites are collected together, unprocessed, to form camera RAW
images. In most sensors, outputs from adjoining photosites are combined
in ways that vary between manufacturers. Pixels are the processed result of
the data from photosites. In a display (such as a monitor) pixels are com-
posed of sub-pixels—red, green, and blue, which can be varied in intensity
to form a wide range of colors.
DIGITIZING
The key elements of digitizing are pixels-per-frame, bits-per-pixel, bit rate,
and video size (the size of the file for a given time frame). Digitizing is the
process of converting analog information (such as from the video sensor)
into digital bits and bytes.
Digitizing involves measuring the analog wave at regular intervals: this
is called sampling and the frequency of the interval is called the sampling
rate (Figures 6.11 and 6.12). As you can see, if the number of samples per
video line is low, the sampling is very crude and doesn’t give a very accu-
rate representation of the original signal. As the frequency of the sampling
increases, the digital conversion becomes more accurate. The sampling
rate used is generally at least twice the highest frequency in the signal; the
reason for this is the Nyquist Theorem.
Figure 6.10. The classic Spectra Professional light meter (with the dome removed for illustration) is basically a single photosite sensor—it produces voltage when light hits it.
NYQUIST LIMIT
The Nyquist Limit or Nyquist Theorem is a term you will hear mentioned
frequently in discussions of digital sensors and video data. No need for us
to go into the mathematics of the theorem here; it states that the sampling
rate of an analog-to-digital conversion must be double the highest analog
frequency. In video, the highest frequencies represent the fne details of a
sampled image. If this isn’t the case, false frequencies may be introduced
into the signal, which can result in aliasing—the dreaded stair-step, jagged
effect along the edges of objects. Some people mistakenly think they need
twice as many pixels as resolution. But because sampling theory refers to
the frequency of the video image, which is made up of a line pair, you need
as many samples as lines, not twice as many samples as lines.
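A small numerical illustration of the theorem (a Python sketch, not tied to any camera): sampling a sine wave at less than twice its frequency makes it indistinguishable from a lower alias frequency.

```python
import math

def sample(freq_hz, rate_hz, n=8):
    """Sample a sine wave of freq_hz at rate_hz samples per second."""
    return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

# A 6 Hz signal sampled at only 8 Hz (below its 12 Hz Nyquist rate)
# yields the same samples as a 2 Hz signal of opposite phase: the
# high frequency "folds down" and masquerades as a lower one.
high = sample(6, 8)
alias = sample(-2, 8)
print(max(abs(a - b) for a, b in zip(high, alias)))  # effectively zero
```

In an image, those "false frequencies" are the jagged stair-steps and moiré patterns the text describes.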
OLPF
Optical Low Pass Filters (OLPF) are used to eliminate moiré interference
patterns, so they are sometimes called anti-aliasing filters. They are made
from layers of optical quartz and usually incorporate an IR (infrared) filter
as well, because silicon, the key component of sensors, is most sensitive
to the longer wavelengths of light—infrared. The reason a sensor will
create moiré is primarily the pattern of photosites in a Color Filter
Array—although all types of sensors (including black-and-white) are sub-
ject to aliasing. When a photograph is taken of a subject that has a pattern,
each pixel is exposed to one color and the camera calculates (interpolates)
the remaining information. The small pattern of photosite filters is what
causes moiré—details smaller than the pixels are interpreted incorrectly
and create false details. The OLPF spreads the information out so that
details which might fall between photosites cover more than one photo-
site; fine detail doesn't fall between the cracks.
DIGITAL SENSORS
At this time, the dominant technologies used in cameras are CCD and
CMOS, although we can expect many innovations in the future, as all
camera companies constantly engage in research in this area. Both types
of sensors have advantages and disadvantages in image quality, low-light
performance, and cost. Since the invention of the image sensor at Bell Labs
at the end of the 1960s, digital sensors have steadily improved in all aspects
of their performance and image reproduction, as well as in cost of manufacturing.
Figure 6.11. (left, top) Digitizing at
a low sample rate gives only a rough
approximation of the original signal
Figure 6.12. (left, below) A higher
sample rate results in digital values that
are much closer to the original signal
CCD
CCD stands for Charge Coupled Device. A CCD is essentially an analog
device—a photon counter; it converts photons to an electrical charge, and
downstream circuitry converts the resulting voltage into digital information: zeros and ones. In
a CCD array, every pixel’s charge is transferred through a limited number
of output connections to be converted to voltage and sent to the DSP as an
analog electrical signal (voltage). Nearly all of the sensor can be devoted to
light capture, and the output's uniformity is fairly high. Each pixel is actu-
ally a MOSFET—Metal Oxide Semiconductor Field Effect Transistor.
So how do all those millions of pixels output their signal through a rela-
tively few connector nodes? It’s a clever process that was conceived at the
very inception of the digital image sensor. Working at AT&T Bell Labs,
George Smith and Willard Boyle came up with the idea of a shift register.
The idea itself is simple; it’s like a bucket brigade: each pixel registers its
charge and then passes it to the next pixel, and so on down the line until it
reaches the output connection.
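The bucket brigade can be sketched in a few lines of Python; this is a toy model of the readout order only, not of real CCD clocking.

```python
def ccd_readout(pixels):
    """Toy model of a CCD shift register: each cycle, every charge
    shifts one position toward the single output node, where the
    end charge is read off. All charges exit, in order, through
    one connection."""
    row = list(pixels)
    output = []
    while row:
        output.append(row.pop())  # end pixel reaches the output node;
        # the pop() implicitly shifts everything else one step along.
    return output

charges = [5, 9, 2, 7]
print(ccd_readout(charges))  # read out end-first: [7, 2, 9, 5]
```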
FRAME TRANSFER CCD
A disadvantage of the shift register design is that after the exposure phase,
during the readout phase, if the readout is not fast enough, erroneous data
can result due to light still falling on the photosites; this can result in verti-
cal smearing of a strong point light source in the frame. Also, the sensor
is basically out of action as an image collection device during readout.
A newer design that solves these problems is the frame transfer CCD. It
employs a hidden area with as many sites as the sensor. When exposure fin-
ishes, all of the charges are transferred to this hidden area, and then readout
can occur without any additional light striking the photosites, and it also
frees up the sensing area for another exposure phase.
CMOS
CMOS stands for Complementary Metal Oxide Semiconductor. In general,
CMOS sensors are lower in cost than CCDs because the manufacturing
process is simpler and there are fewer components involved. They also tend
to use less power in operation and are capable of higher rates of readout.
3-CHIP
Since the early days of television up to the introduction of UHD, most
video cameras used three separate sensors (usually CCD) for the red, green,
and blue components. This arrangement is capable of very high-quality
images. Since there is only one lens, the image needs to be split into three
parts, which is accomplished with prisms and dichroic filters as shown in
Figure 6.18. It is critical that all three light paths be the same length so that
they focus at the same plane. The sensors must be precisely aligned so that
they line up properly when combined into a single color image. By record-
ing primary color readings of red, green, and blue to separate chips, the
3-sensor design produces high-quality, precise color. Single-
chip cameras often interpolate missing RGB information—lowering color
accuracy and degrading the overall resolution.
MAKING COLOR FROM BLACK-AND-WHITE
Photosites are unable to distinguish how much of each color has come
in; they can really only record the number of photons coming in, not
their wavelength (color). To capture color images, each photosite has a
filter over it which only allows penetration of a particular color of light—
although this differentiation can never be absolute; there is always some
overlap. Virtually all current digital cameras can only capture one of the
three primary colors in each photosite, and so they discard roughly 2/3 of
the incoming light; this is because filters work by rejecting wavelengths
that don't match the filter. As a result, the processing circuits have to
approximate the other two primary colors in order to have information
about all three colors at every pixel.
Figure 6.13. (top) Arrangement of a Bayer filter on a sensor.
Figure 6.14. (above) The most significant aspect of the Bayer filter is two green photosites for each red and blue. This is a typical arrangement only; some camera companies arrange them differently.
BAYER FILTER
The most common type of color filter array is called a Bayer filter, or Bayer
filter color array. Invented by Dr. Bryce Bayer at Kodak, it is just one type
of CFA (color filter array) (Figures 6.15 and 6.16), but it is by far the most
widely used in cameras. The sensor has two layers:
• The sensor substrate, which is the photosensitive silicon material;
it measures the light intensity and translates it into an electrical
charge. The sensor has microscopic cavities or wells, which trap
the incoming light and allow it to be measured. Each of these
wells or cavities is a photosite.
• The Bayer filter is a color filter array that is bonded to the sensor
substrate. The sensor on its own can only measure the number of
light photons it collects; since photosites can't "see" color, it is the
filters that differentiate the wavelengths that result in color.
MICROLENS
It is still impossible to make the photosites sit precisely next to each other—
inevitably there will be a tiny gap between them. Any light that falls into
this gap is wasted light. The microlens array (as shown in Figure 6.18) sit-
ting on top of the sensor aims to eliminate this light waste by directing the
light that falls between two photosites into one of them or the other. Each
microlens collects some of the light that would have been lost and redirects
it into the photosite.
DEMOSAICING/DEBAYERING
There is one obvious issue with Bayer flters or anything like them: they
are a mosaic of color pixels—not really an exact reproduction of the origi-
nal image at all. Demosaicing, also known as deBayering, is the process by
which the image is put back together into a usable state. In this process,
the color values from each pixel are interpolated using algorithms. Figure
6.16 shows a simulation of what an undeBayered image looks like. Some
software applications allow limited control over this stage of the process.
Figure 6.15. Red, green, and blue photosites of a typical single-chip, color filter array sensor.
For example, in RedCine-X Pro, you can choose from 1/16, 1/8, 1/4, 1/2
Good, 1/2 Premium, or full deBayer. This selection is made as you dial in
your export settings. So, what quality of deBayer is sufficient to produce
an image that is best for your purposes?
According to Graeme Nattress: “Full deBayer is necessary to extract 4K
resolution for 4K footage. It also makes for the best 2K when scaled down.
However, if you’re going direct to 2K, the half deBayer is optimized to
extract to the full 4K and scale down to 2K in one step, and hence is much
quicker. If you’re just going to 2K, then the half is fne, but you may get a
percent or two more quality going the full debayer + scale route.” (Graeme
Nattress, Film Efects and Standards Conversion for FCP). In RedCine-X Pro,
there is also a deBayer selection named “Nearest Fit.” This setting auto-
matically selects the deBayer setting that is closest to the output resolution
you select for that export. DP Art Adams puts it this way: “A good rule
of thumb is that Bayer pattern sensors lose 20% of their resolution right
of the top due to the deBayering algorithm blending colors from adjacent
photosites into distinct pixels.”
Pixel binning is a process used to combine the charge collected by several
adjacent CCD pixels, and is designed to reduce noise and improve signal-
to-noise ratio and frame rate of digital cameras. The binning process is
performed by a timing circuit that assumes control of the shift registers
prior to amplifcation of the CCD analog signal. Pixel-binning sees data
from four adjacent pixels combined into one summed pixel. A sensor with
0.9 micron pixels can produce results equivalent to 1.8 micron pixels.
Think of the pixels (photosites) as buckets collecting light. You can either
have lots of small buckets or fewer large buckets. The process of pixel-binning
has the effect of combining four small buckets into one very large bucket
when needed. The downside of this process is that the resolution is effec-
tively divided by four when taking a pixel-binned shot. That means
a binned shot on a 24 megapixel camera is actually 6 megapixels, and a
binned shot on a 16 megapixel camera is only 4 megapixels.
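A Python sketch of 2×2 binning on a grayscale array of photosite charges (in a real sensor this summation happens in the shift-register timing, before amplification):

```python
def bin_2x2(image):
    """Sum each 2x2 block of photosite charges into one value.
    Pixel count drops by a factor of four; each binned 'bucket'
    collects the charge of four small ones, improving the
    signal-to-noise ratio."""
    h, w = len(image), len(image[0])
    return [[image[y][x] + image[y][x + 1] +
             image[y + 1][x] + image[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

img = [[1, 2, 3, 4],
       [5, 6, 7, 8]]
print(bin_2x2(img))  # [[14, 22]]: 8 photosites in, 2 binned pixels out
```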
COLOR INTERPOLATION
The most striking aspect of the basic Bayer flter is that it has twice as
many green sites as it does red or blue. This is because the human eye is
much more sensitive to green light than either red or blue, and has a much
greater resolving power in that range.
Clearly it is not an easy task to make a full-color image if each photosite
can only record a single color of light. Each photosite is missing two-thirds
of the color data needed to make a full-color image; also, the filters cannot
do an absolutely precise job of separating the colors, so there is always
going to be some "bleed" and overlap, but most cameras do a surprisingly
good job of color interpolation.
Figure 6.16. An image with a Bayer
flter arrangement superimposed
The methods used for deBayering are quite complex. In very simplifed
terms, the camera treats each 2×2 set of photosites as a single unit. This
provides one red, one blue, and two green photosites in each subset of
the array, and the camera can then estimate the actual color based on the
photon levels in each of these four photosites.
Figure 6.14 is an example of a 2×2 square of four photosites; each pho-
tosite contains a single color—either red, green or blue. Call them G1,
B1, R1, G2. At the end of the exposure, when the shutter has closed and
the photosites are full of photons, the processing circuits start their cal-
culations. If we look at the demosaicing of each 2×2 square, here’s what
goes on: for the pixel at G1, the green value is taken from G1 directly,
while the red and blue values are inferred from the neighboring R1 and B1
photosites; in the simplest case, those photosites’ values are simply used
directly. In more sophisticated algorithms, the values of multiple adjacent
photosites of the same color may be averaged together, or combined using
other mathematical formulae to maximize detail while keeping false color
artifacts to a minimum.
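The simplest case described above can be sketched in Python; this naive scheme takes red and blue directly from their single photosites and averages the two greens, whereas real deBayer algorithms are far more sophisticated.

```python
def demosaic_cell(g1, b1, r1, g2):
    """Naive demosaic of one 2x2 Bayer cell containing photosites
    G1, B1, R1, G2 (one possible layout; arrangements vary).
    Returns one (r, g, b) triple for the cell: red and blue come
    straight from their single photosites, and the two green
    photosites are averaged."""
    return (r1, (g1 + g2) / 2.0, b1)

# Photon counts from the four photosites of one cell:
print(demosaic_cell(g1=100, b1=60, r1=80, g2=110))  # (80, 105.0, 60)
```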
Based on the Bayer pattern, if the photosite in the center is green, the
surrounding photosites will be made up of two blue photosites, two red
photosites, and four green photosites. If it is a red photosite in the center,
it will have four blue photosites and four green photosites around it. If it
is a blue photosite in the center, it will be surrounded by four green and
four red photosites. In general, each photosite is used by at least eight other
photosites so that each can create a full range of color data. This descrip-
tion is a typical example only; the method used for this color interpola-
tion is proprietary to each manufacturer and is thus a closely held secret.
Improvements in this process are a big part of how digital cameras keep
getting better and better, along with improvements in sensors, compres-
sion, and other factors, including postproduction processing.
WHAT COLOR IS YOUR SENSOR?
So what is the color of a digital sensor? The short answer is that, unlike
film stock, which comes in either daylight or tungsten balance, camera
sensors don't have a "color balance." Yes, all cameras allow you to choose a
preferred color temperature or to automatically "white balance" to adjust
to daylight, tungsten, overcast, fluorescent, etc., but these are electronic
corrections to the signal—they don't actually affect the sensor itself.
On some cameras, this color correction is "baked in" and is a permanent
part of the image. As we know, cameras that shoot RAW may display the
image with corrected color balance, but in fact, this is just the metadata
affecting the display—the RAW image remains unaffected, and any desired
color balance can be selected later in software and only needs to be baked
in at a later time (there are some exceptions to this, as we'll see). How-
ever, that being said, some camera sensors do have a color balance that they
110..cinematography:.theory.and.practice.
Figure 6.17. The Blackmagic URSA Mini Pro 12K camera (Photo courtesy Blackmagic Design).
are optimized for—this is called their “native” color balance, just as their
“built in” sensitivity is referred to as their native ISO. Some are known to
have slightly different responses at various color temperatures. The native
ISO or white balance of a sensor is generally determined by which setting
requires the least amount of gain (electronic amplification) to be applied.
This results in the least noise.
For example, the native color balance for the Dragon sensor used in some
Red cameras is 5,000 Kelvin, but it can, of course, be electronically
compensated for any color temperature in the range of 1,700 to 10,000
Kelvin—as we know, this is recorded in the metadata when shooting
RAW images. White Balance presets are available for Tungsten (3200K)
and Daylight (5600K) lighting; the camera can also calculate a color-neutral
White Balance value using the standard technique. Canon takes the
approach that color balance is baked in at the time of shooting.
HOW MANY PIXELS IS ENOUGH?
Counting how many pixels a sensor has can get a bit complicated. You
might think that a sensor listed as 2 MP would have 2 megapixels for each
channel: red, green, and blue. In most cameras, this is not the case. Similarly,
for cameras with Bayer filters, there are twice as many green photosites
as there are red or blue—how is this counted? Each camera company
has made choices as to how to come up with a total count; there is no
industry-wide standard. Unlike with digital still cameras, the megapixel
count is rarely used when discussing a cinema camera. Instead, the number
of pixels measured across the horizontal axis is used—1920, 2K, 4K, 5K, etc.
SHOOTING RESOLUTION
Some cameras can now shoot at 6K, 8K, and beyond (Figures 6.17, 6.19,
and 6.22). There are currently no projectors for this type of footage, so
why do it? It can be thought of as “oversampling.” One popular use for this
larger format is to shoot a frame larger than is intended for the final output;
this leaves some room on the edges for repositioning, steadying shaky
footage, and other uses. David Fincher used this technique on Se7en, for
example, to make adjustments to the frame in post. In film, it is common
practice to shoot larger formats such as 65mm or VistaVision for visual effects
shots. This ensures there is no degradation of quality in postproduction.
Figure 6.18. (above) The jello effect caused by a rolling shutter distorts the fast-moving blades of this helicopter (Photo courtesy Jonen).
Figure 6.19. (right) The Red Epic-W with the Helium 8K sensor (Photo courtesy Lensrentals.com).
SHUTTERS
Shutters are always necessary in any type of photography, either still or
motion. If the film or sensor were always exposed to the light of the image,
it would record the movement in the scene all the time, meaning that the
shot would be blurred. Also, there would be much less control over exposure—light
would be constantly flooding onto the sensor.
SPINNING MIRROR
Motion picture film cameras almost universally use a spinning mirror. In
its most basic form, it is a half-circle (180°): as it spins, when the mirror is
out of the light path, light reaches the film and the image is projected onto
the film emulsion. When the mirror rotates into the light path, it reflects
the image up into the viewfinder. While the optical path is closed off by
the shutter, the camera advances the film to the next frame. If the film were
moving while light was coming in, the result would be a total blur.
ROLLING SHUTTER AND GLOBAL SHUTTER
Only a few digital cameras have this rotating mirror shutter; since the
video sensor doesn’t move between frames as film does, it isn’t really
necessary. This results in smaller and lighter cameras, but it does have a
drawback—video sensors don’t necessarily expose the entire frame all at once.
Instead, they scan the image from top to bottom. A moving object will
have moved in the time between when the top of the frame is recorded
and when the bottom is recorded.
This can have several negative results, including the jello effect—a smearing
of the moving object (Figure 6.18). There are some postproduction
software fixes for this, but of course, it is always better to avoid the need
for this sort of repair. One approach to preventing this problem is to add
a rotating shutter to the camera like the ones used in film cameras. This
has the added benefit of providing an optical viewfinder instead of a video
one. In video, what is called the shutter is not generally a physical device;
it is how the pixels are “read.” In general, CCDs have a global shutter and
CMOS sensors have rolling shutters. A global shutter controls incoming light
to all the photosites simultaneously. While the shutter is open, the sensor
is collecting light, and after the shutter closes, it reads the pixel charges and
gets the sensor ready for the next frame. In other words, the CCD captures
the entire image at the same time and then reads the information after
the capture is completed, rather than reading top to bottom during the
exposure. Because it captures everything at once, the shutter is considered
global. The result is an image with no motion artifacts. A rolling shutter, on
the other hand, is always active and “rolling.” It scans each line of pixels
from top to bottom, one at a time, which can result in the jello effect.
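The row-by-row readout described above can be illustrated with a toy model. The speeds and timings below are arbitrary illustration values, not any real sensor's specifications.

```python
# Toy model of rolling-shutter skew: each sensor row is read a little
# later than the one above it, so a vertical edge moving horizontally
# is recorded at a different column in each row. All numbers here are
# arbitrary illustration values, not a real sensor's timing.

def captured_edge_positions(n_rows, row_read_time_s, edge_speed_px_s):
    """Column where a moving vertical edge lands in each recorded row.
    A global shutter reads all rows at t = 0, so the edge stays straight;
    a rolling shutter reads row r at t = r * row_read_time_s."""
    return [edge_speed_px_s * (row * row_read_time_s)
            for row in range(n_rows)]

# Edge moving at 2000 px/s, rows read 1 ms apart: 2 px of skew per row.
print(captured_edge_positions(4, 0.001, 2000.0))  # [0.0, 2.0, 4.0, 6.0]
```

The steadily increasing offsets are the diagonal skew seen in Figure 6.18; with a global shutter, `row_read_time_s` is effectively zero and every row reports the same position.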
Figure 6.21. Erik Messerschmidt used a
Red Monstro with the Helium 8K sensor
to shoot these day-for-night scenes in
Mank
Figure 6.20. A Panasonic GH5S on location. DSLRs that shoot UHD are a popular choice for productions with limited budgets, and are often used as additional cameras on larger shoots (Photo courtesy BorrowLenses.com).
Adding gain inevitably results in a picture that is degraded, although camera makers
have made huge strides and, in most cases, small increases in gain still result
in usable images. However, gain is to be avoided if possible.
As the Red camera company puts it, “With Red, the native ISO speed
describes the recommended starting point for balancing the competing
trade-offs of noise in the shadows and clipping in the highlights. This does
not necessarily reflect anything intrinsic to a camera sensor itself or the
performance of cameras by different companies. It is a reflection of the
sensor design, signal processing, and quality standards all in combination.”
With cameras that shoot RAW, these changes to ISO are recorded as metadata
and the real change in exposure takes place in postproduction. High-end
Canon cameras are slightly different in this regard, as we discussed earlier.
DP Art Adams says this about native ISO for his own uses: “What I
do is look at the noise floor on a waveform monitor. I have some idea of
how much noise I like, and I can judge noise pretty well just by looking at
the thickness of the trace on a Leader or Tektronix waveform monitor (see
Figures 6.23 and 6.24).
“When I find an ISO that gives me the width of trace that I want, I verify
the noise level by putting a lens on the camera, underexposing the image,
and looking at the shadow noise. If I’m happy with that level of noise,
then that’s my new ISO. Where ISO is placed is based entirely on noise
level and personal taste. The gain coming off the sensor may seem to indicate
it is ‘natively’ 160, or 200, or 320, but if the signal is quiet enough you
can put middle gray wherever you want by changing the ISO and seeing if
you like the noise level.”
DP David Mullen poses this question: “As sensors and signal processing
get quiet enough, the issue of ‘native’ sensitivity matters less than
what is the highest ISO rating you can give it before the noise becomes
objectionable. The [Red] Dragon sensor is quieter than the [Red] MX
sensor so it can be rated faster than the MX sensor, but since there is more
highlight information at low ISOs compared to the MX, it probably has a
lower ‘native’ sensitivity. So should a sensor that can be rated faster with
less noise be called less sensitive? Sort of makes you question what ‘native
sensitivity’ actually means.”
Figure 6.22. The Canon 8K EOS digital camera with 8K conversion box (Photo courtesy Canon USA).
NOISE
Noise is not the same as film grain—some people make the mistake of
trying to imitate the look of film by introducing video noise. There is
always going to be some noise present in any electronic device—it’s just a
consequence of the physics involved. It is especially a problem at the low
end (darkest areas of the image). This is because as the actual signal (the
image) gets lower in brightness, it becomes indistinguishable from the
electronic noise. As we’ll see in the chapter Exposure, some cameras are capable
of giving warning signs when parts of the image are “in noise.” Since noise
is most visible in the darker areas of the image there are two consequences:
first, an underexposed image will be noisy (just as happens with film negative)
and second, at some point the level of noise overwhelms the detail in
the image. Beyond this inherent background, the primary cause of noise
becoming visible in the image is gain—electronic amplification.
This can be a result of using a higher ISO, underexposing and compensating
later, or even of color compensation in white balance, which may
result in certain color channels being amplified more than others. Charles
Poynton, color scientist, consultant, and writer, says, “Sensitivity and noise
are two sides of the same coin. Don’t believe any specification that states
one without the other. In particular, don’t believe typical video camera
specifications.” A standard measure of noise is the Signal-to-Noise ratio
(abbreviated S/N or SNR), which is usually expressed in decibels (dB). As
DP Art Adams succinctly puts it—“When it comes to sensors, you don’t
get anything for free.”
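The decibel measure mentioned above follows a standard formula; here is a minimal sketch. The 20·log10 form applies to amplitude (voltage) ratios, while power ratios use 10·log10.

```python
import math

# Signal-to-noise ratio in decibels. The 20*log10 form applies to
# amplitude (voltage) ratios; power ratios use 10*log10 instead.
def snr_db(signal_amplitude, noise_amplitude):
    return 20.0 * math.log10(signal_amplitude / noise_amplitude)

# A signal 1,000 times the noise amplitude is a 60 dB ratio:
print(round(snr_db(1000.0, 1.0), 3))  # 60.0
```

This is why Poynton's warning matters: a camera spec quoting a high SNR is meaningless without knowing at what sensitivity (gain) it was measured, since raising gain amplifies the noise along with the signal.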
IR AND HOT MIRROR FILTERS
Some cameras (mostly early Reds) require an additional IR (infrared) or
hot mirror filter. Sensors can sometimes see things humans can’t perceive,
such as an excessive amount of infrared. In normal circumstances, IR is a
small proportion of the light hitting a sensor. In low-light situations, this
amount of infrared at the sensor is overpowered by the rest of the spectrum
and isn’t a problem (Figure 6.25).
Figure 6.23. (top) A frame from a Red camera at ISO 200. Minimal noise is shown at the bottom of the display, especially in the far left part of the waveform monitor—in the black and near black regions of the frame, the waveform trace is very thin. For more on understanding waveform monitors, see the chapter Measuring Digital.
Figure 6.24. (above) The same shot with the ISO cranked up to 2000 on the Red. The waveform trace in the black regions is now a very thick line, indicating lots of noise in the darkest areas. Some noise is also visible in the other parts of the frame, shown by a slight fuzziness throughout the waveform trace. There is noise throughout the frame; it just shows up more in the darkest parts of the shot. This is an enlarged portion of the original frame and even so, the noise is barely visible. This illustrates the usefulness of the waveform monitor in checking for noise. On a large monitor and certainly on a cinema screen, the noise will be visible.
Unfortunately, when shooting outdoors, Neutral Density (ND) filters
are usually necessary to reduce the amount of light to a manageable level.
Although ND filters are designed (ideally) to introduce no coloration
(that’s why they are neutral), they usually do allow infrared light to pass.
Unfortunately, despite the best efforts of filter makers and some highly
advanced production techniques, few ND filters are truly 100% neutral.
Testing before you shoot is advised.
Manufacturers of film stocks make very low sensitivity stocks (as low as
50 ISO) which are useful for day exteriors. Digital cinema cameras rarely go
this low. Traditional HD cameras often had an ND filter wheel and a color
filter wheel to accommodate different lighting color situations and light levels.
While ND filters work fine for reducing the amount of light to a workable
level, they have a drawback—they don’t affect infrared equally. The
result is that the proportion of visible light compared to IR is changed; the
ratio of IR is higher than normal. This can result in red contamination of
some colors. IR filters and hot mirror filters are similar in their effect but
not exactly the same thing. A hot mirror is a dichroic filter that works by
reflecting infrared light back, allowing visible light to pass.
The choice of an IR filter is camera-specific, as different cameras have
different IR sensitivities. Cameras with built-in NDs typically have filters
that are designed with that camera’s IR response in mind. That doesn’t
mean that IR filtration is never necessary for cameras with internal NDs,
or when shooting without NDs: some cameras show noticeable IR pollution
(especially under IR-rich tungsten light) even with no ND. As always,
it’s worth testing before an important shoot with an unfamiliar camera. A
reddish color on black fabrics is often a telltale sign, but not always; some
cameras show IR pollution more in the blue channel. Figure 6.25 shows
on the left side the effect of using a normal Neutral Density filter—skin
tones are noticeably redder and the color contamination in the cloth samples
is very apparent. On the right side of the frame, the problem is solved
by using a Tiffen IRND. Internal ND filter wheels can be calibrated to be
near perfectly neutral for that particular sensor.
Figure 6.25. This filter test by Tiffen shows significant infrared pollution on the left using an ordinary .9 Neutral Density filter. Note especially the reddish IR contamination in the cloth samples in the center with the standard ND filter. The right-hand side shows the effect of their IRND Neutral Density, which is also combined with, in this example, a Black Pro-Mist 1/2 (Photo courtesy The Tiffen Company).
Bit rate is the measure of how many bits can be transferred per second.
Higher bit rates generally mean better quality images but also much larger
files. While media storage is measured in bytes (B), data rates are measured
in bits per second (lowercase b).
BIT DEPTH
Bit depth is not the same thing as bit rate. Depth refers to how many bits
of data are recorded per pixel. By way of example, most consumer video
equipment is 8 bit, while high-end professional equipment may be 12 or
14 bits per pixel. This gives you more to work with and has huge implications
for workflow, for the ability to achieve the image you want, and for
issues such as dynamic range and color accuracy, but it also means that you
have a lot more data to deal with. One thing to watch out for is that bit
depth is counted in two different ways. One of them is bits total and the
other is bits per channel, which includes the Red, Green, and Blue channels
and, in some cases, also the Alpha channel, which is transparency data.
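As an illustration of how bit depth, bit rate, and the bits-versus-bytes distinction interact, here is a minimal, hypothetical uncompressed data-rate calculation. Real recording formats compress heavily, so actual rates are far lower than this raw figure.

```python
# Hypothetical uncompressed data-rate calculation, to illustrate
# bits-per-channel vs. bits total and bits (b) vs. bytes (B).
# Real recording formats compress, so actual rates are much lower.

def data_rate_mbps(width, height, bits_per_channel, channels, fps):
    """Uncompressed rate in megabits per second (1 Mb = 1,000,000 bits)."""
    bits_per_pixel = bits_per_channel * channels  # e.g. 10 b/ch RGB = 30 b total
    bits_per_frame = width * height * bits_per_pixel
    return bits_per_frame * fps / 1_000_000

# 1920x1080 RGB at 10 bits per channel and 24 fps:
mbps = data_rate_mbps(1920, 1080, 10, 3, 24)
print(round(mbps, 3))      # 1492.992 Mb/s
print(round(mbps / 8, 3))  # 186.624 MB/s of storage
```

The divide-by-8 step is exactly the small-b/big-B distinction the text warns about: a drive spec quoted in MB/s holds eight times fewer "megabits" than a bit rate quoted in Mb/s suggests.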
FRAME RATES
So film is 24 frames per second and video is 30, right? It would be great if
the world were that simple, but, unfortunately, it’s not. Yes, film has been
shot at 24 frames per second since about 1929. Before that it was more
commonly shot at about 14 to 18 FPS, but since cameras were mostly
hand cranked before then, it was really just an approximation. It was the
introduction of sync sound that brought about standardization: in order
for sound to be synchronized, cameras and projectors have to run
at a constant rate. Thomas Edison maintained that 46 FPS was the slowest
frame rate that wouldn’t cause eye strain. In the end, 24 FPS was chosen as
the standard. It was not, as is often said, chosen for perceptual or audio reasons.
Interestingly, some filmmakers now believe that 48 FPS is the ideal frame
rate, while others advocate rates of 60 FPS and even 120 FPS.
Figure 6.26. The Red Monstro 8K camera (Photo courtesy Red Digital Cinema).
When video was invented, 30 FPS was initially chosen as the standard
rate, to a large extent because the power supply of the US runs at a very
reliable 60 hertz (cycles per second, or Hz) and electronic synchronization
is a fundamental part of how video works, both in cameras and on TV
sets. In Europe, 25 FPS was chosen as the standard because the electrical
systems run at 50 Hz.
Running video at 30 FPS soon ran into problems; it turned out that with
color there was interference with the subcarrier signal. To solve the
problem, engineers shifted the frame rate to 29.97. As a result, when
24 FPS film is converted to video, the actual frame rate turns out to
be 23.976. Some people make the point that 23.98 (the rounded-off value)
is not the same as 23.976, in that some software applications handle them
in slightly different ways.
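These rates are exact ratios, not the rounded decimals. A short sketch of the arithmetic shows why the "23.98" label is only an approximation:

```python
from fractions import Fraction

# The NTSC-derived rates are exact ratios, not the rounded decimals:
ntsc_30 = Fraction(30000, 1001)   # the rate labeled "29.97"
ntsc_24 = Fraction(24000, 1001)   # the rate labeled "23.976" or "23.98"

print(round(float(ntsc_30), 5))   # 29.97003
print(round(float(ntsc_24), 5))   # 23.97602

# "23.98" is only a rounding of 24000/1001, which is one reason some
# software treats the two labels differently:
assert round(float(ntsc_24), 2) == 23.98
assert float(ntsc_24) != 23.98
```

Working with the exact ratios (rather than the rounded decimals) is how editing and sync software avoids the drift that accumulates over long timecode durations.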
THE FILM LOOK VS. THE VIDEO LOOK
Film at 24 FPS has a distinctive look that most people consider to be
cinematic in appearance; on the other hand, 30 FPS has a “video” look. The
primary reason for this is that we have become accustomed to 24 frames
per second. In reality, when we see film projected, we are seeing 48 FPS.
This is because projectors have a blade which interrupts each frame and
makes it the equivalent of two frames—it appears that Edison was right
after all. Film also has an impressionistic feel—it doesn’t actually show us
every detail. For this reason, it has been the practice for some time in video
to shoot dramatic material at 24 FPS and sports at 30 FPS (actually 23.98
or 29.97).
07
measuring digital
Figure 7.1. (top) Traditional SMPTE 75% color bars as they are seen on a video monitor. The primary and secondary colors are arranged in order of brightness. In the bottom row, the second patch from the left is a 100% white patch.
Figure 7.2. (bottom) The SD color bars as seen on a waveform monitor, which measures the voltage of the digital signal. Also on the lower row, at the right of the color bars, is the PLUGE, which can be clearly seen on the waveform signal. Notice how the top line of colors nicely stair steps down in brightness level.
Figure 7.6. (above) The traditional SMPTE color bars. Lower right, between the two pure black patches, is the PLUGE. The I and +Q patches are from the now disused NTSC color system. Although Standard Def is deader than disco, you will still see these color bars and it’s useful to understand them. The values are in IRE (Institute of Radio Engineers) units, where 0 is pure black and 100 is pure white. Although they technically do not apply to HD/UHD video, many people still refer to IRE values.
Figure 7.7. (right) The PLUGE is valuable for monitor calibration. Between two pure black patches are sections that are at 3.5, 7.5, and 11.5 IRE.
TYPES OF DISPLAY
Most waveform/vectorscope units and software displays are capable of
showing the signal in several formats, all of which have their own uses
at some point or other in the production and postproduction workflow.
Some units can show several types of display at the same time, and this is
user selectable to adapt to various situations and needs (see Figures 7.3, 7.4,
and 7.5). Different technicians have their own favorites for each situation.
LUMINANCE/LUMA
The most basic of displays is a trace of the luminance/brightness/exposure
of the picture (Figure 7.2). For setting exposure levels this is often
the quickest to use. Pure luminance shows only the Y (luminance) levels.
A display that shows both luminance and chrominance (C) is called a
Y/C display; some operators turn the chroma off when exposure is the
primary concern. A luminance-only display may not show when an
individual color channel is clipping. For reasons more technical than we
need to go into here, it is properly called luma—Y’.
OVERLAY
The Overlay display on a waveform monitor shows the red, green, and blue
traces overlaid on each other. To make this readable, the traces are
color coded to represent each channel in its own hue (Figure 7.4).
RGB PARADE
Parade view shows the luminance of the red, green, and blue components
side by side (hence parade). Many technicians say that they “make
their living with parade display.” Its value is obvious: rather than just
showing the overall luminance levels of the frame, it shows the relative
values of the different color channels; this means that judgments can be
made about color balance as well as just luminance (Figure 7.5). It shows
color balance and also when individual color channels are clipping.
Figure 7.8. (top) UHD color bars are substantially different from HD bars. They include a Y-ramp (luminance from 0% to 100%) as well as 40% and 75% gray patches. The PLUGE is different as well, in that it is measured in percentage instead of IRE; technically, IRE values don’t apply to HD signals. This set of color bars was developed by ARIB, the Association of Radio Industries and Businesses, a Japanese industry group. They have been standardized as SMPTE RP 219-2002.
Figure 7.9. (above) Anatomy of the ARIB/SMPTE HD color bars.
Figure 7.10. (above) The Spyder5 color checker can be used on monitors to achieve correct color balance and brightness.
Figure 7.11. (right, top) The vectorscope display is based on the color wheel. Every color is assigned a numeric value based on its position around the circle, beginning with zero at the 3:00 o’clock position. In this diagram, the primary and secondary colors are shown as discrete segments for clarity, but, of course, in reality the colors are a continuous spectrum.
Figure 7.12. (right, below) A typical image shown on the vectorscope. We can see that this frame has a good deal of red and some blue, with not too much of the other colors.
YCBCR
This display is a bit trickier to interpret; it shows luminance first, followed
by the color difference signals: Cb and Cr. Cb is blue minus luma and Cr
is red minus luma. The two color signals will be markedly different from
the left-hand luma reading—it contains the entire luminance range of the
picture, where the Cb and Cr only reflect whatever color might be in the
picture. Since the luminance is removed from these two signals, they are
much smaller than the luma signal. In practice, it is seldom used on the set.
When interpreting the YCbCr signals, remember that in 8-bit, black is 16
and white is 235 (although it may be converted internally for display), and
you want your signal to fall between those limits. YCbCr is both a color
space and a way of encoding RGB information. The output color depends
on the actual RGB primaries used to display the signal.
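The encoding described here can be sketched numerically. This is a minimal illustration assuming Rec.709 luma coefficients and 8-bit "legal range" scaling; real encoders differ in matrix choice, range handling, and chroma subsampling.

```python
# Minimal sketch of 8-bit Y'CbCr "legal range" encoding, assuming
# Rec.709 luma coefficients. Real encoders differ in matrix choice,
# range handling, and chroma subsampling.

def rgb_to_ycbcr_8bit(r, g, b):
    """r, g, b are gamma-encoded values normalized to 0.0-1.0."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec.709 luma (Y')
    cb = (b - y) / 1.8556                     # blue minus luma, scaled
    cr = (r - y) / 1.5748                     # red minus luma, scaled
    return (round(16 + 219 * y),    # luma: black = 16, white = 235
            round(128 + 224 * cb),  # chroma centered on code 128
            round(128 + 224 * cr))

print(rgb_to_ycbcr_8bit(1.0, 1.0, 1.0))  # white -> (235, 128, 128)
print(rgb_to_ycbcr_8bit(0.0, 0.0, 0.0))  # black -> (16, 128, 128)
```

Note that for any neutral gray the chroma codes stay at 128: that is why the Cb and Cr traces on the monitor sit near a flat center line for a low-saturation scene, while the luma trace spans the full picture range.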
COLOR BARS IN DETAIL
On the old-style SMPTE color bars test chart (Figure 7.1), the top two-thirds
of the frame are seven vertical color bars. These color bars have been
officially replaced, but you’ll see them in many places, so we need to talk
about them. They have been replaced by the ARIB/SMPTE color bars,
which we’ll explore in a moment (Figures 7.8 and 7.9). Starting at the left,
the bars are 80% gray, yellow, cyan, green, magenta, red, and blue. The
color patches are at 75% intensity—commonly called “75% bars.”
In this sequence, blue is a component in every other bar—as we will see
in calibrating monitors, this is a very useful aspect. Also, red is on or off
with every other bar, while green is present in the four left bars and not
present in the three on the right. Because green is the largest component of
luminance (brightness), this contributes to the stair step pattern which
descends evenly from left to right when
viewing on the waveform monitor. Below the main block of bars is a
strip of blue, magenta, cyan, and gray patches. When a monitor is
set to “blue only,” these patches, in combination with the main set
of color bars, are used to calibrate the color controls; they appear
as four solid blue bars, with no visible distinction between the bars
and the patches, if the color controls are properly adjusted. We’ll
look at this calibration procedure in more detail in a moment—
calibrating the monitors on the set is one of the most critical steps
of preparing for the day’s shooting.
The lower section of the SMPTE color bars contains a patch of
white (100%) and a series of small patches of black and near black,
called the PLUGE, which stands for Picture Line-Up Generation
Equipment. It was developed at the BBC as a way to make sure all
the cameras throughout the building were calibrated to the same
standard. It was produced by a signal generator in the basement and
sent to all the studios so that engineers could calibrate cameras and
monitors.
Figure 7.13. (top) Color bars displayed on the Odyssey 7Q+ combination monitor/external recorder. Note the boxes for the primaries red, green, and blue, and the secondaries magenta, cyan, and yellow. When everything is OK with the signal being generated by the camera, the corresponding patches on the color bars will fall into these boxes. Remember this is an internal camera signal, so it doesn’t serve the same purposes as a test chart in front of the camera (Photo courtesy Convergent Design).
Figure 7.14. (above) The SpyderCheckr 24 produces color correction presets for a number of applications, including Lightroom, ACR, and DaVinci Resolve 11. This can be useful when it comes to adjusting video capture from different types of cameras, such as GoPros and DSLRs (Photo courtesy Datacolor).
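The stair-step ordering of the 75% bars can be checked with a quick calculation. This sketch assumes Rec.601 luma coefficients, the SD standard for which these bars were designed; green's large share of luma is what drives the descending staircase.

```python
# Luma of the seven 75% bars, computed with Rec.601 coefficients (the
# SD standard these bars were designed for). Green carries the largest
# share of luma, which produces the descending stair step seen on the
# waveform monitor.

BARS = [("gray",    0.75, 0.75, 0.75),
        ("yellow",  0.75, 0.75, 0.00),
        ("cyan",    0.00, 0.75, 0.75),
        ("green",   0.00, 0.75, 0.00),
        ("magenta", 0.75, 0.00, 0.75),
        ("red",     0.75, 0.00, 0.00),
        ("blue",    0.00, 0.00, 0.75)]

def luma_601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

lumas = [luma_601(r, g, b) for _, r, g, b in BARS]

# Left-to-right, each bar is darker than the one before it:
assert all(a > b for a, b in zip(lumas, lumas[1:]))
print([round(v, 3) for v in lumas])
```

The same calculation also shows the "blue in every other bar" property: the four bars with a blue component alternate with the three without, which is what makes the blue-only calibration trick work.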
USING THE PLUGE IN MONITOR CALIBRATION
In HD color bars, the PLUGE is underneath the red primary color
bar (Figure 7.6); on the UHD color bars it is on the bottom row at
center right (Figure 7.8). In HD, it comprises three small vertical
bars: a rightmost one with intensity just above the saturated black
level, a middle one with intensity exactly equal to saturated black,
and a leftmost one with intensity just below saturated black (or
“blacker than black”).
Figure 7.15. The ChromaDuMonde on the vectorscope. The color patches are distributed evenly, as you would expect; however, notice that they do not reach “the boxes” on the graticule. This is because the DSC charts are designed to show full saturation when the vectorscope is set at 2x gain, and Final Cut Pro X (unlike professional vectorscopes) does not have a setting for “times two.” Also, note how the skin tone patches fall very close to each other along the “skin tone line,” despite the fact that they are skin tones for widely varying coloration. In fact, human skin tone varies mostly by luminance and not by color.
Figure 7.17. The DSC Labs ChromaDuMonde on the waveform monitor in Final Cut Pro X. Notice that the CaviBlack gets all the way to zero—a reliable reference. The white patches do not go to 100% as expected, but to 90%. You can also see the luma (brightness) distributions of the various color patches and skin tone patches.
Figure 7.25. The X-Rite ColorChecker
Classic. (Photo courtesy X-Rite Photo
and Video)
Figure 7.28. (above) The Siemens Star is an excellent way to judge focus, but there are several other focus targets from various companies.
Figure 7.29. The AbelCine resolution test chart.
A newer test reference is the Cambelles by DSC Labs (Figures 8.13, 8.14,
and 8.15 in the chapter Exposure). Carefully calibrated and printed, it has
four models with typical and representative skin tones. When shooting
camera tests, lighting tests, or camera calibration, it is always important to
include a live person in the frame along with charts—actual skin reacts to
light in a way that no test chart can.
MEASURING IMAGE RESOLUTION
Testing resolution/sharpness is done with test targets that have progressively
smaller detail—the point at which the viewer can no longer see the
fine details is the limit of resolution. Resolution units can be tied to physical
sizes (e.g., lines per mm, lines per inch), or to the overall size of a picture
(lines per picture height).
A resolution of ten lines per millimeter means five dark lines alternating
with five light lines, or five line pairs (LP) per millimeter (5 LP/mm). Lens
and film resolution are most often quoted in line pairs per millimeter;
some charts are measured in lines per picture height (LPPH). Some tests are
expressed in lines, while some are in line pairs. Moreover, the results might
also be in lines per picture height rather than lines per millimeter. Figure
7.29 shows the AbelCine test chart.
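The unit conversions above are simple arithmetic. This sketch assumes a roughly Super 35 picture height of 13.9 mm, purely as an illustration value:

```python
# Conversions among the resolution units described above. The 13.9 mm
# figure is an assumed, roughly Super 35 picture height, used purely
# for illustration.

def lines_to_line_pairs(lines_per_mm):
    """One line pair = one dark line + one light line."""
    return lines_per_mm / 2.0

def lp_per_mm_to_lpph(lp_per_mm, picture_height_mm):
    """Lines per picture height counts single lines, not pairs."""
    return lp_per_mm * 2.0 * picture_height_mm

print(lines_to_line_pairs(10))             # 10 lines/mm = 5.0 LP/mm
print(round(lp_per_mm_to_lpph(40, 13.9)))  # 40 LP/mm on a 13.9 mm frame
```

Keeping lines and line pairs straight matters when comparing published figures: a chart quoted in "lines" will read twice as high as the same performance quoted in "line pairs."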
AbelCine has this advice about using their chart: “When the lens is optimally
focused, you will see high contrast in low to medium spatial frequencies
(i.e., where there are larger features in the image), maximum detail in
higher frequencies (i.e., in finer features), as well as minimum chromatic
artifacts (i.e., false color). All of these aspects can be observed individually
on the chart. Start by looking at the patterns representing lower frequencies
(exactly which patterns will depend on the resolution of the camera
and lens, as well as the size of the pattern relative to the frame).
“As you rotate the focus barrel on the lens, the coarser patterns will
increase in sharpness and contrast. You may see a slight shift in color when
you pass the point of maximum focus. To achieve fine focus, you may
need to engage the image zoom function of your camera and/or monitor.
Slowly change the focus on the lens while looking at the finest pattern that
shows any detail, and find the point that exhibits maximum sharpness in
the finest visible pattern.” The circle with colors is used for testing chromatic
aberration—if there is color fringing on the edges of these patches, then
the lens suffers from chromatic aberration, meaning not all wavelengths
are focused to precisely the same plane of focus.
08
exposure
EXPOSURE THEORY
Frankly, exposure can get pretty technical, so it’s important to grasp the
basic concepts first before we plunge into the world of exposure control
for technical and artistic purposes. Let’s take a look at exposure the simple
way just to ease into it. As a cinematographer or DIT, you may think you
completely understand exposure (and you may well), but it is also very
possible that there’s a lot more to it than you may have thought.
This introduction is a bit simplified, but it will provide a working
understanding of exposure that is useful without being too technical. First of all,
there is one notion that has to be put away right now: some people think
of exposure as nothing more than “it’s too dark” or “it’s too light”—that’s
only part of the story. There are many other aspects of exposure
that are vitally important to understand.
WHAT DO WE WANT EXPOSURE TO DO FOR US?
What is it we want from exposure? More precisely, what is “good” expo-
sure and what is “bad” exposure? Let’s take a typical scene, an average one.
It will have something in the frame that is very dark, almost completely
black. It will also have something that is almost completely white, maybe
a white lace tablecloth with sun falling on it. In between, it will have the
whole range of dark to light values—the middle tones, some very dark
tones, some very light tones.
From a technical viewpoint, we want it to be reproduced exactly as it
appeared in real life—with the black areas being reproduced as black in
the finished product, the white areas reproduced as white, and the middle
tones reproduced as middle tones. This is the dream.
Now, of course, there will be times when you want to deliberately under-
or overexpose for artistic purposes, and that is fine, although you need to
be very cautious and you absolutely need to test in advance. In this
discussion, we are only talking about theoretically ideal exposure, but that
is what we are trying to do in the vast majority of cases anyway. So how
do we do that? How do we exactly reproduce the scene in front of us?
Let’s look at the factors involved and what effect they have on the negative
and ultimately the final print as it will be projected. Figure 8.3 shows film
negative underexposed, overexposed, and at normal exposure.
Figure 8.1. (top) A Zeiss prime wide open at T/2.1, looking down the barrel and on the scale.
Figure 8.2. (above) The lens stopped all the way down to T/22.
THE BUCKET
Let’s talk about the recording medium itself. In film shooting it is the raw film stock; in digital, it is the sensor chip, which takes the light that falls on it and converts it to electronic signals. For our purposes here, they are both the same: exposure principles apply equally to both film and video, with some exceptions. They both do the same job: recording and storing an image that is formed by patterns of light and shadow that are focused on them by the lens. In this context, we’ll only mention film exposure when it serves the purpose of illustrating a point or highlighting an aspect of general exposure theory.
Think of the sensor/recording medium as a bucket that needs to be filled with water. It can hold exactly a certain amount of water, no more, no less. If you don’t put in enough water, it’s not filled up (underexposure). Too much and water slops over the sides and creates a mess (overexposure). What we want to do is give that bucket the exact right amount of water, not too much, not too little—that is ideal exposure. So how do we control how much light reaches the sensor? Again, in this regard, digital camera sensors are no different from film emulsion.
CONTROLLING EXPOSURE
We have several ways of regulating how much light reaches the film or digital camera sensor. The first of these is the iris or aperture, which is nothing more than a light control valve inside the lens (Figures 8.1 and 8.2). Obviously, when the iris is closed down to a smaller opening, it lets less light through than when it is opened up to a larger opening. How open or
Figure 8.3. Exposure on film. (A) is normal exposure—good contrast, a full range of tones, and minimal grain. (B) is the negative of the normal exposure. Notice that it also has a full range of tones, from near total black (which will print as white) to almost clear negative (which will print as black). (C) is a severely underexposed frame—three stops under. It’s dark but also very grainy and doesn’t have a full range of tones, hardly anything above middle grays. (D) is the negative of the badly underexposed shot; it’s what we call a “thin” negative. (E) is three stops overexposed, and (F) is the negative of this shot; it’s what is called a “thick” negative, difficult to get a good print from.
closed the iris is set for is measured in f/stops (we’ll talk about that in more detail later). Remember, the film or sensor wants only so much light, no more, no less (they’re kind of dumb that way). If our scene, in reality, is in the bright sun, we can close down the iris to a small opening to let less of that light through. If our scene is dark, we can open up the iris to a wider opening to let in all the light we can get—but sometimes this will not be enough. There are other things that control how much light reaches the image plane, which we’ll talk about.
CHANGE THE BUCKET
There is another, more basic way to change the exposure: use a different bucket. Every digital sensor has a certain sensitivity to light; it’s part of their design. This means that some are more sensitive to light, and some are less sensitive. It is rated in ISO, which refers to the International Organization for Standardization. It was previously called ASA, for the American Standards Association. Although the acronym ISO has many other uses (because the organization publishes standards for all sorts of things), in the world of cinematography it signifies a “rating” of the sensitivity of the camera/sensor; the two must be thought of as a unit because some aspects of ISO are actually handled in the Digital Signal Processor and elsewhere in the camera circuitry, not exclusively in the sensor. In some cases, the term EI (Exposure Index) is used to indicate the sensitivity of the recording medium. The difference is that while ISO is derived from a specific formula, EI is a suggested rating based on what the manufacturer believes will give the best results.
Table 8.1. Typical light levels in various situations. Clearly, there is an enormous range.
LUX       LIGHTING
100,000+  Direct sunlight
10,000+   Indirect sun
1,000     Overcast sky
500       Clear sunrise
200–500   Office lighting
80        Hallway
10        Twilight
5         Street lights
1         Candle at 1 m
1         Deep twilight
A sensor with a low sensitivity needs lots of light to fill it up and make a
“good” image. A high-speed film is like using a smaller bucket—you don’t need as much to fill it up. A low-speed sensor is like a larger bucket—it takes more to fill it up, but on the other hand we have more water. In the case of film and video images, “having more water” in this analogy means that we have more picture information, which in the end results in a better image. As we’ll see later on, this is one important difference between how film and HD cameras work and how most cameras that shoot RAW work.
Art Adams puts it this way: “Speed in a sensor is similar to speed in film: it’s about how far down the noise floor/base fog level you push things. Natively all [digital] sensors seem to be roughly the same ISO (this used to be 160, but now it appears to be 320), but what separates the adults from the kids is how well they handle noise. The farther down the noise floor, the more dynamic range a sensor will have. It all comes down to what has been done on the sensor, and in internal processing, to eliminate noise from the signal.
“One aspect of speed relates to how big the photosites are on the sensor. Bigger ones take less time to fill and are more sensitive; smaller ones take longer to fill and are less sensitive. In this way your water analogy works well.”
THE ELEMENTS OF EXPOSURE
So we have several elements to contend with in exposure:
• The amount of light falling on the scene. Also, the reflectance of things within the scene.
• Aperture (iris): a light valve that lets in more or less light.
• Shutter speed: the longer the shutter is open, the more light reaches the film or sensor. Frame rate also affects shutter speed.
• Shutter angle: the narrower the angle, the less light reaches the sensor.
• ISO (sensitivity). Using a higher ISO is an easy fix for insufficient exposure, but it involves the penalty of more video noise and an image that is not as good.
• Neutral Density (ND) filters can reduce the light when it is too much for the aperture and other methods to deal with.
LIGHT
Intensity of light is measured in foot-candles (in the United States) or in lux in metric (SI) countries. A foot-candle (fc) equals 10.76 lux. A foot-candle is the light from a standard candle at a distance of one foot (it’s like the standard horse in horsepower). One lux is the illumination produced by one standard candle at 1 meter. A sensor exposed to a standard candle 1 meter away receives 1 lux of exposure. Table 8.1 shows some typical lighting levels for common situations. These are just general averages, of
course; individual situations may vary greatly, especially when it comes to interiors. Some typical points of reference include:
• Sunlight on an average day ranges from 3,175 to 10,000 fc (32,000 to 100,000+ lux).
• A bright office has about 40 fc or 400 lux of illumination.
• Moonlight (full moon) is about 1 lux (roughly a tenth of a foot-candle).
As you can see, the brightness range between the darkest situations humans encounter and the brightest situations is over 100,000 to 1—a fairly amazing range for the eye.
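The foot-candle/lux relationship above is simple arithmetic; here is a minimal sketch using the 10.76 conversion factor from the text (the function names are mine, for illustration only):

```python
# 1 foot-candle = 10.76 lux (the conversion factor quoted in the text).
FC_TO_LUX = 10.76

def fc_to_lux(fc: float) -> float:
    """Convert foot-candles to lux."""
    return fc * FC_TO_LUX

def lux_to_fc(lux: float) -> float:
    """Convert lux to foot-candles."""
    return lux / FC_TO_LUX

print(fc_to_lux(40))       # a bright office: roughly 430 lux
print(lux_to_fc(100_000))  # direct sunlight: roughly 9,300 fc
```

Note that the book’s round figures (40 fc ≈ 400 lux) are working approximations, not exact conversions.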
F/STOPS
Most lenses have a means of controlling the amount of light they pass through to the film or video sensor; this is called the aperture or iris, and its setting is measured in f/stops. The f/stop is the mathematical relationship of the focal length of the lens to the size of the aperture.
Stop is a short term for f/stop. On a lens, the f/stop is the ratio of the focal length of the lens to the diameter of the entrance pupil. This works out to each f/stop being greater than the previous by the square root of 2. Opening up one stop means twice as much light is passing through the iris. Closing down one stop means that 1/2 as much light is going through. To be exact, the entrance pupil is not the same thing as the size of the front element of the lens, but they are related. An f/stop is derived from the simple formula:
f = F/D, which translates to:
f/stop = Focal length/Diameter of entrance pupil
F/stops are frequently used in lighting as well; they don’t only apply to lenses. In lighting, the fundamental concept is that one stop more equals double the amount of light, and one stop less means half the amount of light.
If the brightest point in the scene has 128 times more luminance than the darkest point (seven stops), then we say it has a seven-stop scene brightness ratio. We’ll see that this plays an important role in understanding the dynamic range of a camera or recording method.
You will sometimes hear the related term T/stop (transmission stop). It’s the same idea but derived in a different manner. F/stop is determined by the simple calculation shown above. The drawback is that some lenses transmit less light than the formula indicates. A T/stop is determined by actually measuring the transmission of the lens on an optical bench. T/stops are especially useful for lenses that transmit less light due to their design, such as zoom lenses, which may have a dozen or more glass elements. Most lenses these days are marked only in T/stops; some older lenses had both markings. They are usually on opposite sides of the lens barrel, and, depending on the manufacturer, they may be marked in different colors.
SHUTTER SPEED/FRAME RATE/SHUTTER ANGLE
These three work together in determining exposure. Film cameras (and a few video cameras) have rotating shutters that either allow light to reach the recording medium or close off the light. Frame rate or frames per second (FPS) applies to both film and video. Obviously, if the camera is running at a very low frame rate (such as 4 FPS), each frame will get more exposure time. At a higher frame rate, such as 100 FPS, each frame will be exposed to light for a much shorter time. When a camera has a physical shutter, as all film cameras and only a few video cameras do, the shutter angle may be 180° (open half the time, closed half the time) or may have a variable shutter angle. In conjunction with frame rate, this also affects shutter speed.
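The standard relationship implied here—though not written out as a formula in the text—is that the exposure time per frame is the fraction of the frame interval during which the rotating shutter is open. A minimal sketch, with illustrative function names of my own:

```python
def exposure_time(fps: float, shutter_angle_deg: float = 180.0) -> float:
    """Exposure time per frame, in seconds.

    The shutter is open for (angle / 360) of each frame interval (1 / fps).
    """
    return (shutter_angle_deg / 360.0) / fps

print(exposure_time(24, 180))   # 1/48 s: the classic cinema exposure
print(exposure_time(100, 180))  # 1/200 s: higher frame rate, less light per frame
print(exposure_time(24, 90))    # 1/96 s: narrower angle, one stop less light
```

This is why doubling the frame rate or halving the shutter angle each costs one stop of exposure.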
Figure 8.4. The Hurter and Driffield (H&D) characteristic curve shows the exposure response of a generic type of film—the classic S-curve. The toe is the shadow areas; the shoulder is the highlights of the scene. The straight-line (linear) portion of the curve represents the midtones. The slope of the linear portion describes the gamma (contrastiness) of the film—it is different in video. As we’ll see later, although the output of video sensors tends to be linear (not an S-curve like this), it is quite common to add an S-curve to video at some point.
THE RESPONSE CURVE
In the 1890s, Ferdinand Hurter and Vero Driffield invented a way to measure how exposure affected a film negative. The H&D diagram is still used today. Figure 8.4 shows a typical H&D response curve for film negative. The X-axis indicates increasing exposure. The Y-axis is increasing negative density, which we can think of as brightness of the image—more exposure means a brighter image. To the left on the X-axis are the darker parts of the scene, commonly called the shadows; on the diagram, this area is called the toe. To the right on the X-axis are the brighter parts of the scene—the highlights, called the shoulder on the diagram. In video, this area is called the knee. The middle part, which is linear, represents the midtones.
UNDEREXPOSURE
Figure 8.5 shows underexposure. All of the original scene brightness values are pushed to the left. This means that highlights in the scene are recorded as just light or medium tones. The shadows are pushed down to where there is no detail or separation recorded, because the response curve at that point is essentially flat—decreases in exposure at this point result in little or no change in the image brightness—detail and separation are lost.
OVEREXPOSURE
Figure 8.6 shows overexposure—the scene brightness values are pushed to the right. Dark areas of the scene are recorded as grays instead of variations of black. On the right, the scene highlights have no separation or detail—they are on the flat part of the curve; increases in exposure in the flat part don’t result in changes in the image: detail and separation are lost.
CORRECT EXPOSURE
Looking at Figure 8.7, we see how theoretically correct exposure places all of the scene values so that they fit nicely on the curve: highlights go up to just where the curve flattens, and scene shadows only go down to where they are still recorded as slight variations in image brightness. This is always the goal—to get the brightness values of the scene to fit between the two extremes of overexposure and underexposure, both of which have negative consequences for the final image.
Figure 8.5. (top) Although the shape of the curve might be different for video, the basic principles remain the same as film, as shown here. Underexposure pushes everything to the left of the curve, so the highlights of the scene only get as high as the middle gray part of the curve, while the dark areas of the scene are pushed left into the part of the curve that is flat. This means that they will be lost in darkness without separation and detail.
Figure 8.6. (middle) With overexposure, the scene values are pushed to the right, which means that the darkest parts of the scene are rendered as washed-out grays and the highlights are lost.
Figure 8.7. (bottom) Correct exposure fits all of the scene brightness values nicely onto the curve. As we’ll see later in this chapter, the visual representation here is very similar to what we see in a histogram.
are hopelessly off the scale. If we “expose for highlights” (by closing down to a smaller f/stop), we record all the variations of the light tones, but the dark values are pushed completely off the bottom edge and don’t record at all; there is no information on the negative, no detail to be pulled out.
TWO TYPES OF EXPOSURE
There are two ways to think about exposure: overall exposure and balance within the frame. So far we’ve been talking about overall exposure of the entire frame; this is what you can control with the iris, shutter speed, and some other tools, such as neutral density filters, which reduce the total amount of light.
You also have to think about balance of exposure within the frame. If you have a scene that has something very bright in the frame and also something that is very dark in the frame, you may be able to expose the whole frame properly for one or the other of them, but not both. This is not something you can fix with the iris (aperture), changing ISO, or anything else with the camera or lens. This is a problem that can only be fixed with lighting and grip equipment; in other words, you have to change the scene. Another way to deal with it in exterior shooting is to shoot at a different time of day, change the angle, or move the scene to another place, such as into the shadow of a building. For more on lighting, see Motion Picture and Video Lighting by the same author as this book.
HOW FILM AND VIDEO ARE DIFFERENT
There is one crucial way in which film and video are different. With older HD cameras (which had the look “baked in”), it was absolutely critical that you not overexpose the image. This is not as critical with negative film. Film stock is fairly tolerant of overexposure and doesn’t do as well with underexposure; HD, on the other hand, is very good with underexposure. But remember, you will always get a better picture with exposure that is right on the money: this is the crucial thing to remember about exposure.
WE’LL FIX IT IN POST
One thing you will sometimes hear on a set is “don’t worry, we’ll fix it in post.” There is nothing wrong with making an image better in postproduction. What you don’t want to do is take the attitude that you can be sloppy and careless on the set because “everything can be fixed in post.” It’s not true. Fine-tuning an image, ensuring consistency of exposure and color, and going for a specific “look” in post is an important part of the process. However, this is not to be confused with “fixing” a mistake, which almost never results in a better image, or even an acceptable one.
THE BOTTOM LINE
Exposure is about much more than just “it’s too dark” or “it’s too light.” It’s also about whether or not an image will be noisy, it’s about the overall contrast of the image, and it’s about whether or not we will see subtleties in the shadows and the highlights. Overexposure and underexposure will desaturate the color of the scene; this is particularly important in greenscreen shooting. The bottom line is this: you will get the best image possible only when your exposure is correct. This is true of still photos on film, motion picture film, digital photography, and, of course, video.
EXPOSURE IN SHOOTING RAW VIDEO
Some people make the mistake of thinking that just because you’re shooting RAW video, exposure isn’t critical—“It’s all captured in RAW, so anything can be fixed later.” This is a myth, of course; you can still screw up the image with bad exposure just as you can with any other type of image acquisition. Underexposure can result in a noisy image, and overexposure can cause clipping. Clipped highlights cannot be fixed. There is no magic software, no cute little button you click in post, that will bring detail back into the highlights—clipping means that there is just no information there at all, nothing to recover.
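Why clipping is unrecoverable can be shown with a toy numerical sketch (my own illustration, not from the text): once values hit the sensor’s maximum, distinct highlights collapse to the same number, and no scaling in post can separate them again.

```python
# Toy model: normalized scene luminances, with the sensor's maximum at 1.0.
sensor_max = 1.0
scene = [0.2, 0.8, 1.4, 2.0]   # the last two highlights exceed the sensor's range

# Recording clips everything above the maximum — two distinct tones merge.
recorded = [min(v, sensor_max) for v in scene]
print(recorded)                 # the 1.4 and 2.0 values both become 1.0

# "Fixing it in post" by darkening only scales the merged value; the lost
# separation between the two highlights never comes back.
darkened = [v * 0.5 for v in recorded]
print(darkened)                 # both clipped highlights are still identical
```

The same logic applies per color channel, which is why clipping in one channel alone causes the inaccurate coloration mentioned later in the chapter.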
Figure 8.8. (top) A normally exposed 11-step grayscale and its trace on a waveform monitor.
Figure 8.9. (second down) This very overexposed 11-step grayscale shows clipping in the lighter steps, and the darkest steps are more light gray than they should be. These grayscales and the waveform traces that represent them would apply equally to footage shot on digital and footage shot on film that has been transferred to digital.
Figure 8.10. (third down) Here an attempt has been made to “save” this shot by reducing brightness in postproduction. It does nothing to bring back any separation to the clipped steps, and it reduces the dark steps till they just mush together. Only middle gray is where it should be, and that is only because we placed it there in post.
Figure 8.11. (fourth down) A severely underexposed frame of the grayscale. Not only are the highlights just mushy gray, but the darker tones are crushed—there is much less separation in the steps. The lowest steps are also deeply into video noise.
Figure 8.12. (bottom) Trying to “save” the underexposed shot by bringing middle gray back up does nothing to bring back the separation between the steps; it only serves to make the darkest steps muddy gray, and, of course, the video noise will still be there.
Figure 8.27. Red’s Goal Posts are also helpful in judging exposure. They are the two vertical stripes at the far right and left of the histogram. The height of the bar on the right indicates what percentage of pixels are clipped; the bar on the left indicates what percentage of pixels are in noise. Full scale on these indicators is only 25% of total pixels, not all of them as you might expect.
Adam Wilt has this to say about these tools: “The traffic lights seem interesting, but in the end, how valuable is it really to know that ‘2% of pixels in that channel’ are clipping? In my Red work, I find them useful as ‘idiot lights’: I can tell at a glance if there’s a possibility I might be getting into trouble. They don’t replace a careful study of the histogram; what they do is say, ‘hey, buddy, you might want to take a closer look here...’ and they say it even when I’m too busy to be focusing on the histogram, because I’m focusing instead on movement and composition.
“Same with the Goal Posts; they are halfway between the see-it-in-a-flash ‘idiot lights’ of the traffic lights and the study-intensive histogram. They show me (a) I have stuff in the scene that exceeds the range of the capture system at either or both ends of the tonal scale, and (b) by comparing their relative heights, I can see quickly if I’m losing more shadow details, or more highlights, or if I’ve balanced the losses equally (assuming that’s what I want, of course). I use ‘em all: traffic lights as a quick-and-dirty warning, goal posts as a highlights/shadows balancing indicator, and the histogram or an external WFM to see in detail what the signal is doing. The traffic lights and goal posts don’t show me anything I can’t get from the histogram or WFM, but they show it to me very quickly, with a minimum of focus and concentration required on my part to interpret the results. It’s nice to have choices.”
FALSE COLOR EXPOSURE DISPLAY
Many pro cameras and field monitors now offer the choice of false color, which displays different tonal ranges coded in various “false” colors (Figure 8.28 and Tables 8.2 and 8.3). As with any color code system, it’s worthless unless you know the key. Although different camera companies use their own set of colors, they usually have some commonalities. False color displays can usually be turned on or off, either in menus or with buttons on the exterior of the camera that can be assigned to any one of several different camera controls (depending on the wishes of each individual user). Once you get to know them, they can be a useful guide to exposure, but they can also interfere with viewing the scene while operating the camera. Many cinematographers use them when lighting the scene and while determining the proper exposure for the scene, and not while shooting the scene.
RED FALSE COLORS
Red cameras have two false color selections: Video Mode and Exposure Mode. Exposure Mode false colors are shown in Figure 8.30. Exposure Mode is a more simplified method which is used with RAW image viewing. Most of the image will appear as a grayscale, but purple will be overlaid on any parts of the image that are underexposed, and red will be overlaid on any overexposed regions of the frame. Since this applies to the RAW data, it indicates over- and underexposed regions of the frame regardless of the current ISO or look settings.
Figure 8.28. False colors can give more precise exposure information about individual portions of the image, but only if you know the code! At top is the grayscale including superblack and superwhite. Below that is the color code for Red camera false colors (Video Mode), then Alexa’s color code, and finally the Red camera Exposure Mode at bottom.
[Figure legend—Grayscale Tonal Values: Black to White. Red Camera False Colors, Video Mode: Orange, Purple, Yellow, Green, Straw, Pink, Red. Arriflex False Colors: Purple, Yellow, Green, Blue, Pink, Red. Red Camera False Colors, Exposure Mode: Purple, Red.]
COMPARING RED EXPOSURE MODES
Table 8.2 shows the specifications of the three Red exposure modes—Video Mode, Zebra Mode, and Exposure Mode. Zebra and Video Mode are based on IRE or RGB values, which is a relative scale based on the output signal sent to the monitor, not necessarily the values of the scene. Red puts it this way: “As with other IRE-based modes, zebra mode is only applicable for the current ISO and look settings (such as with HD-SDI output)—not for the RAW image data. If anything is changed in post-production, the indicators won’t be representative of the final output tones. In those situations, zebra mode is, therefore, more of a preview and output brightness tool than an exposure tool.” Despite this, they can still be useful in some circumstances. In general, most exposure tools are measuring the image with the various adjustments to the look already applied.
RAW, in this context, is an absolute scale based on the output of the sensor. It is not necessarily related to the viewing image in terms of brightness values. Red says: “This is most useful when trying to optimize exposure and looking toward post-production.” This is, of course, the basic concept of shooting RAW vs. shooting video that is more or less “ready to go.” Shooting RAW/Log is not about producing a final image; it’s about producing a “digital negative.” The downside is that the images are not directly viewable, and this makes using exposure tools like zebras and histograms pretty much useless—they can only be used as approximations.
With a Red camera, the exposure mode you choose will determine what type of false color scheme you see displayed in the viewfinder. Red sums up their overall recommendation for using these tools: “First, in Exposure Mode, use the purple and red indicators to adjust your lighting or lens aperture. The strategy is usually to achieve an optimal balance between clipping from overexposure and image noise from underexposure. With most scenes, there can be a surprising range of exposure latitude before excessive red or purple indicators appear.”
ARRI ALEXA FALSE COLORS
The Arri Alexa has similar false color codes, but they are a bit simpler (Figure 8.28 and Table 8.3). Arri’s color code has fewer steps, which some find makes it easier to read in the viewfinder or monitor. Green is middle gray (38% to 42%) and Pink is average Caucasian skin tone (52% to 56%). Red (which they call White Clipping) is 99% to 100%, and Purple, which they call Black Clipping, is 0% to 2.5%.
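The logic of a false color display is just a lookup from signal level to warning color. As a sketch, here is a simplified mapper using only the four Alexa ranges quoted above; the real camera maps many more bands (see Table 8.3), and the function name is my own:

```python
# Simplified sketch of the Alexa false-color idea, using only the ranges
# quoted in the text; not the camera's full color table.
def alexa_false_color(signal_pct: float) -> str:
    """Map a signal level (0-100%) to a false-color warning band."""
    if signal_pct <= 2.5:
        return "purple (black clipping)"
    if 38 <= signal_pct <= 42:
        return "green (middle gray)"
    if 52 <= signal_pct <= 56:
        return "pink (average Caucasian skin tone)"
    if signal_pct >= 99:
        return "red (white clipping)"
    return "grayscale (no warning)"

print(alexa_false_color(40))    # green (middle gray)
print(alexa_false_color(99.5))  # red (white clipping)
```

In the camera this mapping is applied per pixel, which is what produces the colored overlay in the viewfinder.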
Figure 8.29. (top) A frame from a Red camera in standard mode. (Courtesy the Red Digital Cinema Camera Company)
Figure 8.30. (bottom) The same frame in Exposure Mode: red indicates clipping (overexposure) and purple shows parts of the frame in noise (under nominal exposure). (Courtesy the Red Digital Cinema Camera Company)
STRATEGIES OF EXPOSURE
We’ve looked at the many tools available for judging exposure. But what about the basic philosophy of exposure? It’s not a purely mechanical, by-the-book procedure—every experienced cinematographer has their own way of working, their own favorite tools and bag of tricks. As with just about everything in filmmaking, it’s not about the right or the wrong way—it’s about achieving the desired final product.
Adjustments need to be made for individual cameras, artistic preferences, and so on. The goal is to get an image that will yield the best results in the end. An individual cinematographer will develop methods that work for them based on experience, testing, and viewing the dailies.
DON’T LET IT CLIP, BUT AVOID THE NOISE
Homer (not Homer Simpson, the other one) wrote about two mythical sea monsters on each side of the Strait of Messina, a dangerous rocky shoal on one side and a deadly whirlpool on the other. Called Scylla and Charybdis, they are the origin of the term “between a rock and a hard place.” In shooting video, our Scylla and Charybdis are clipping in the brightest areas of the scene and noise in the darker parts of the scene. Technically, clipping is sensor saturation; camera manufacturers refer to it as well overflow, in reference to the wells of the photosites.
The noise floor is not the same as the lower black limit. It is the level of sensor output with the lens capped—no photons hitting the
Table 8.2. Red’s Exposure modes and specifications. Notice that Video Mode and Zebra Mode are only applicable in IRE (Rec.709, not RAW).
RED EXPOSURE MODES   Basis   Levels   Adjustable?
Exposure Mode        RAW     2        No
Video Mode           IRE     9        No
Zebra Mode           IRE     1–3      Yes
sensor at all. In this condition, there is still some electrical output from the photosites, simply because all electrical systems have some degree of randomness. There is noise everywhere in the sensor, but it is usually most noticeable in the darkest areas of the image.
Table 8.3. Numerical values and colors of the Arri Alexa False Color system. (Courtesy of Arri)
The Red company summarizes: “Optimal exposure starts with a deceiv-
ingly simple strategy: record as much light as necessary, but not so much
that important highlights lose all texture. This is based on two fundamen-
tal properties of digital image capture:
“Noise. As less light is received, image noise increases. This happens
throughout an image with less exposure, but also within darker regions of
the same image for a given exposure.
“Highlight Clipping. If too much light is received, otherwise continuous
tones hit a digital wall and become solid white. Alternatively, this might
happen in just one of the individual color channels, which can cause inac-
curate coloration. Unlike image noise, which increases gradually with less
light, highlight clipping appears abruptly once the clipping threshold has
been surpassed.”
TEXTURE & DETAIL
Some terms you’ll hear frequently when discussing exposure are texture, detail, and separation. For example: textured white, textured black, detail in the highlights, or separation in the shadows. These are all essentially the same thing and are important references in exposure. The concepts originated with Ansel Adams, who needed terms to describe how far exposure (negative density in his case, as he worked with film) can go before tones are just pure black or pure white; soot and chalk, as he sometimes called them; just featureless regions of the image with no detail at all. Textured white is defined as the lightest tone in the image where you can still see some texture, some details, some separation of subtle tones.
Diffuse white is the reflectance of an illuminated white object. Since perfectly reflective objects don’t occur in the real world, diffuse white is about 90% reflectance; roughly a sheet of white paper or the white patch on a ColorChecker or a DSC test chart. There are laboratory test objects with higher reflectance, all the way up to 99.9%, but in the real world, 90% is the standard we use in video measurement. It is called diffuse white because it is not the same as maximum white. Diffuse means it is a dull surface (such as paper) that reflects all wavelengths in all directions.
Specular highlights are things that go above 90% diffuse reflectance. These might include a candle flame or a glint of a chrome car bumper. As we’ll see later, being able to accommodate these intense highlights is a big difference between old-style Rec.709/HD video and the modern cameras and file formats such as OpenEXR. Textured black is the darkest tone where there is still some separation and detail. Viewed on the waveform, it is the step just above where the darkest blacks merge indistinguishably into the noise.
Figure 8.31. (top) At top is a Red frame two-and-a-third stops underexposed (FLUT = 0).
Figure 8.32. (bottom) Red’s FLUT in use. A FLUT of 2.3 has been applied, bringing the shot back to fairly normal exposure. In this case, the FLUT has been applied in Assimilate Scratch.
THE DILEMMA
What is a great leap forward for shooting with these modern cameras is also something of a disadvantage: recording log-encoded video gives us huge benefits in terms of dynamic range, but it also means the images the camera is outputting and recording are in no way WYSIWYG (what you see is what you get). In fact, on the monitor, they look awful—low in contrast and saturation. Worse, the waveform monitor and vectorscope are not showing us the reality of the scene as it will be in the end result.
Figure 8.33. (top, left) The classic method of incident metering—hold the meter at the subject’s position and point the dome at the camera. Technically it is called the Norwood Dome.
Figure 8.34. (top, right) Using the same Sekonic meter as a spot meter allows the DP to check the exposure levels of different parts of the scene based on a combination of the lights hitting the objects and their inherent reflectance. It is also the only way to effectively meter objects that generate their own light, such as a lampshade, a neon sign, or a campfire.
A commonly used method is to display a Rec.709 conversion of the image on the set monitors. This makes the scene viewable, but it is only an approximation of what is being recorded and a distant representation of what the image will eventually look like. Likewise, in a scene with a Rec.709 viewing LUT applied, the waveform monitor isn’t telling us the whole story. It’s not just that they don’t always have the benefit of fine-tuned color correction, but that they don’t show us what the image will look like after it has been “developed,” using the metaphor that RAW data is like a film negative.
“Developing,” in this case, is not a photochemical process, but the deBayering and log-to-linear conversions that will take place outside of the camera. If you’ve ever looked at original camera negative (OCN) you
know that it is nearly impossible to comprehend visually; not only is it a
negative in that tones are reversed, but also the colors are all opposite and
there is a heavy orange mask. Cinematographers have developed the skill
of looking at camera negative and getting a sense of what is a thick (over-
exposed) or thin (underexposed) image, but any understanding of color is
impossible. These concepts come into play as we consider exposure with
cameras that record log data either internally or on an external recorder.
Some DPs and DITs prefer to have the log-encoded image on the wave-
form monitor and a viewing LUT applied for the monitors.
But how do we achieve the goal of an ideal digital negative? What
methods get you there reliably and quickly—speed is always a factor in
motion picture production.
USING LIGHT METERS
Many DPs have returned to using their light meters, both incident and
reflectance (spot) meters; this is after years of using them infrequently if
at all when shooting HD video, where factors such as gamma, knee, black
gamma, etc. altered the image enough to make light meter readings less
relevant or even misleading. When shooting RAW/log, the waveform
suddenly becomes less precise because the image is displayed at very low
contrast. Art Adams puts it this way: “I find myself using an incident meter
much more often now. I rough in the lighting and then check it on a moni-
tor and waveform, and then use the incident meter for consistency when
moving around within the scene. I still pull out my spot meter occasion-
ally but I’m becoming less reliant on it. The incident meter helps me get
my mid-tones right and keep them consistent, and the monitor and wave-
form tell me about everything else.”
METER THE KEY
This is the age-old formula for exposing film: use an incident meter (with
proper ISO, frame rate, filter factors, and shutter angle dialed in on the
meter) to read the key light (usually right where the actor is) and set the
aperture for that reading (Figure 8.33). This provides an overall average for
the scene—it is usually remarkably successful. Art Adams says, “If I have
a choice I work strictly by meter. If not I work off the waveform. I use
my meter to set proper mid-tones, and the waveform tells me if I’m losing
detail in either the highlights or the shadows.”
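The arithmetic a meter performs behind this procedure can be sketched as follows (this is not from the book; the exposure equation and the calibration constant C, roughly 250–330 depending on the receptor, are standard light-meter conventions):

```python
import math

def incident_aperture(lux, iso, shutter_seconds, C=330.0):
    # Incident-meter exposure equation: N^2 / t = E * S / C,
    # solved for the f-number N
    return math.sqrt(lux * iso * shutter_seconds / C)

# Hypothetical key light of 5000 lux, ISO 800, 180-degree shutter at 24 fps
n = incident_aperture(5000, 800, 1 / 48)
print(round(n, 1))  # roughly f/16
```

Dialing ISO, frame rate, and shutter angle into the meter, as described above, is what fixes every variable in this equation except the aperture.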
In the same vein, British DP Geoff Boyle states, “I’ve gone back totally
to using my incident meter together with my old Pentax digital spot cali-
brated with the zone system. I decide where the skin will be and keep it
there. Overall with the incident reading, spot checks of highlights, shad-
ows, and skin, hang on! That’s how I used to shoot film. I have expanded
the zone system range by a stop at either end, i.e. where I knew I would
get pure white or black, I now know I will have detail. I use an ISO for the
cameras that I’ve established from testing works for the way that I work;
that sounds familiar as well. Yes, I will use a waveform on some work; it’s
a great tool there or diamonds [diamond display], but otherwise it’s a rare
thing for me. For me, it’s a question of dynamic range; once it gets to 12
stops or more I can relax and get into the pictures rather than the tech.”
Figure 8.35. (above) The Luxi dome on an iPhone enables the app Cine
Meter II to do incident readings in addition to its reflectance metering and
other functions. The dome also allows the app to work as a very accurate
color meter.
Here’s Art Adams on shooting HD: “The big change for me is that I used
to use nothing but a spot meter [when shooting film] but video gamma
curves are so variable that trying to nail an exposure based on a reflected
reading is the true moving target. I can use a spot meter to find out how
far overexposed a window will be on a scout but it’s tough to light using
nothing but a spot meter in HD, the way I could in film. Film stocks had
different gammas but we only had to know a couple of them; every HD
camera has at least 7 or 8 basic variations, plus lots of other variables that
come into play.”
USING THE WAVEFORM MONITOR
When shooting HD (Rec.709), the waveform monitor has always been
an important tool for making exposure decisions. It was supplemented
by zebras and histograms, but those are rough guides at best; the wave-
form is accurate, precise, and gives you information about every part of
the scene. The problem, of course, is that if the camera is outputting log/
RAW information, the waveform display doesn’t reflect the actual image
as it is being recorded and will be processed later down the line (Figure
9.23 in Linear, Gamma, Log). This output is difficult to judge—to make
any sense of it takes a good deal of experience and mental calculation. As
a result, monitors on the set generally display a converted image; most
often the log image is converted to Rec.709 by a LUT either internal to
the camera, externally through a LUT box, or with a monitor that can
host LUTs. While this makes it easier to interpret, it in no way shows the
real image, particularly the highlights and shadows. Adams says, “A LUT
may not show the real image but LUTs almost always show a subset of the
image that throws away information by applying gamma, etc. I have no
problem working off a Rec.709 converted log signal with a monitor and
a waveform because I know that if it looks nice on set I can always make it
look a lot nicer later in the grade. For example, if the Rec.709 image isn’t
clipping then I know the log image really isn’t clipping. I make the image
look nice and then shoot. I do check the log image on occasion but not
very often.”
Cinematographer David Mullen says this: “It would be nice to see a
waveform display of the log signal while monitoring a Rec.709 image to
know exactly how much clipping is going on... but most camera set-ups
seem to involve sending Rec.709 from the camera to the monitors so any
waveform on the cart would be reading Rec.709.”
PLACING MIDDLE GRAY
When shooting film negative, the prevailing practice is to use the incident
meter for setting the f/stop on the lens and the spot meter to check
highlights such as windows, lampshades, etc. Reading with the incident
meter is the same as placing middle gray at 18%. Some cinematographers
use a similar method when shooting log/RAW video—placing middle
gray at the values recommended by the camera manufacturer, which we’ll
talk about in more detail in the next chapter.
Art Adams uses this method: “I go for accurate middle gray, or place
middle gray where I want it, and see how the rest falls. Mid-tones are
where our eyes are the most sensitive. We naturally roll off the extreme
shadows and highlights, just as cameras have been designed to do, so it
makes little or no sense to me to base a scene’s exposure on the part of the
image that our brains naturally compress anyway. I generally expose raw
and log based around middle gray using a meter… unless I’m shooting
doc-style, and quickly. In that case, I tend to look at the mid-tones and the
highlights together on a waveform.”
Adams adds about the Arri Alexa: “Middle gray stays at nearly the same
value when toggling between Log C and Rec.709, with the upshot that
the Log C image—which is not designed to be viewable on any kind of
HD monitor—still looks okay when viewed on a Rec.709 monitor. The
bottom line is that middle gray changes very little when toggling between
Log C and Rec.709, and this seems to make Log C more ‘monitor friendly’
than other log curves.”
START AT THE BOTTOM OR START AT THE TOP
Placing something pure black in the scene at 0% on the waveform seems
tempting but the catch is finding something truly black. At first glance,
it seems quite a bit safer than trying to select something in the frame to
place at 100% white. As we have seen, what is really white is subjective
and highly variable.
Especially if you are shooting RAW/log and viewing in Rec.709, it is
extremely difficult to even determine where the clipping level really is.
More confounding: is the white object you selected 90% diffuse white,
100% white, or something else? When using the DSC Labs ChromaDu-
Monde chart, the Cavi-Black (Figure 8.39) is pure black, but as we saw, it
takes a good deal of innovation to make it truly black; a condition very
unlikely to occur in a typical scene or even in an ordinary situation when
testing or setting up a camera. Art Adams puts it this way: “Putting some-
thing black at the black point is similarly pointless because there’s not a lot
of real black out there unless it’s a dark shadow. Also, if a camera exhibits
wide dynamic range but it isn’t equally split between highlights and shad-
ows, you’ll always have more stops in the shadows, which means putting
something black at black results in pushing the exposure way down. If
what’s black in one frame isn’t black in another you end up with inconsis-
tent noise and overall exposure.”
Figure 8.36. Zones as shown by photographing a towel in sunlight (Zone
0 through Zone X, with textured black, middle gray, and textured white
marked). What’s important here is the concept of texture and detail.
EXPOSE TO THE RIGHT
Expose to the Right (ETTR) is popular with digital still photographers and
is occasionally mentioned in connection with exposure in digital cinema.
The idea is simple—since the dark areas of the image are where there is
the most noise, we want to push the exposure as high as we can without
clipping. On the histogram, the right side is the highlights, so those who
use this method try to move the image toward the right on the histogram
(Figures 8.40 and 8.41).
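In code, the core idea reduces to picking the largest exposure gain that keeps the brightest pixel just below clipping (a toy sketch with made-up linear pixel values, not an actual camera algorithm):

```python
def ettr_gain(pixels, clip=1.0):
    # Largest multiplier that keeps the peak value at or below the clip point
    peak = max(pixels)
    return clip / peak if peak > 0 else 1.0

scene = [0.02, 0.10, 0.18, 0.45]    # hypothetical linear pixel values
gain = ettr_gain(scene)             # just over one stop of push
pushed = [v * gain for v in scene]  # the histogram shifts to the right
```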
This method is not without its critics, however; most of them assert that
it is no longer necessary as the noise level of cameras steadily improves.
“Exposing to the right results in inconsistent noise from shot to shot, which
can be jarring, and also tends to result in less consistency of exposure such
that every shot needs its own grade,” says Adams. He adds, “Exposing to
the right or to the left is great for one shot that stands alone… which almost
never happens. When your images sit adjacent to others why would you
base their exposures on the things that change the most between them?”
Geoff Boyle comments, “ETTR is not a practical method of working for
us, every shot having to be graded individually and huge measures taken to
match. Equally, whilst you can pull and push film you can’t do it on a shot
by shot basis.” In practice, cinematographers rarely use ETTR on feature
films or other long-form projects. As an exposure method, it might still
be useful for isolated shots, such as landscapes or product shots, and on
commercials which might consist of only a few shots in the day. The Red
camera company has this to say: “Some advocate a strategy called ‘expose
to the right’ (ETTR), whose central principle is to record as much light
as possible without clipping, causing the histogram to appear shifted to
the far right. While this approach works well with stills photography, it
greatly increases the likelihood of clipped highlights with video footage,
since lighting conditions are often more dynamic.”
Figure 8.37. Adam Wilt’s Cine Meter II is a smartphone app with
reflectance and incident metering plus waveform and adjustable false
color displays (Courtesy Adam Wilt).
ZEBRAS
As we have previously discussed, zebras can be operator selected to appear
in the viewfinder of many cameras and on some monitors and are a handy,
always present (if you choose) check on highlight values and clipping. The
problem is they are based on measuring the IRE (luma) value and are thus
a bit more difficult to use when shooting log. In cases where you are shoot-
ing HD style (that is, basically in Rec.709 mode or WYSIWYG), they are
very useful. Adams comments, “If I’m running around documentary style
then I look at zebras and judge mid-tones in the viewfinder by eye. It’s not
ideal, but I’m old school enough that I can do it well.”
“Typically I expose through a LUT. Knowing where middle gray falls
on a log curve is useful in certain circumstances, but log curves should
almost never be viewed as an exposure reference. If someone is watching
the log curve directly they should be looking for clipping and detail close
to the noise floor—technical issues only, not artistic. Log is a storage
medium only. It’s not meant to be viewed. Knowing where a company
puts middle gray can be useful in knowing what they are emphasizing: are
they storing more information in the highlights or in the shadows? That’ll
tell you where you have more latitude for risk taking.”
—DP Art Adams
THE MONITOR
During the HD years, you often heard “Don’t trust the monitor!” Several
advances have changed that. First of all, the kind of high-end monitors
that are likely to be the DP’s main monitor near camera and at the DIT cart
have greatly improved in color and tone scale accuracy. Also, much more
attention is paid to proper calibration.
David Mullen puts it like this: “I mainly come to rely on the monitor
with occasional double-checks using various devices—meters, wave-
forms, histograms, just not on every shot. The main point is the feedback
loop over time (if this is a long-form project), you check the footage again
in dailies, in the editing room, etc., to see if your exposure technique is
giving you good results, and sometimes you make adjustments when you
find that your day work is a bit too hot or your night work is a bit too
dark, etc. I also shoot tests before I begin to see how things look on the set
versus later in a DI theater.”
Viewing conditions are important—a monitor, standing in a brightly lit
area outdoors or even on the set, isn’t going to give you accurate informa-
tion. This is why you’ll see dark hoods around monitors on the set, or at
least, the grips will set 4×4 floppies or other shades to prevent glare on the
monitor. DITs often work inside a tent when setting up outdoors.
KNOW THYSELF AND KNOW THY CAMERA
Figure 8.38. Cine Meter II with both an overlay waveform and false colors
displayed. The yellow brackets represent the spot meter, which can be
moved, allowing the user to select what part of the scene is being exposed
for (Courtesy Adam Wilt).
Just as you get to know the characteristics of particular cameras, you also
need to get to know your own inclinations. As David Mullen puts it:
“Finally, the old saying ‘Know thyself ’ can be applied to exposure tech-
nique; we know if we have a tendency to overexpose or underexpose in
general so we can develop a technique to take that into account.” As you
go through the feedback cycle of using your meters, waveform, zebras,
or whatever it is you use, then setting exposure and viewing the results in
dailies, you need to watch what works and what doesn’t. Do you have a
tendency to overexpose night scenes? Underexpose day scenes? Whatever
it is you do, learn from it and bring that self-knowledge back to the set
with you next time you shoot. Perhaps you consistently interpret your
incident meter in a particular way and tend to set exposure by the wave-
form in a slightly different way. Despite the accuracy of the tools, it’s
never a straightforward, mechanical process; there is always an element of
human judgment involved.
BLACKMAGIC CAMERA EXPOSURE ADVICE
Blackmagic Design, the maker of DaVinci Resolve and video cameras, offers
the following view on exposure with their cameras: “Why do my RAW
shots look overexposed? Answer: The 100% Zebra level in the Display
Settings helps you adjust your exposure to ensure that you don’t overload
the sensor and clip your highlights. It is based on the full dynamic range
capability of the Blackmagic Cinema Camera and not on video levels. A
quick way to ensure you do not allow the sensor to clip any image data is
to set your Zebra level to 100%, expose your shot such that zebras start to
appear and then back it off until the zebras disappear. If you have an auto
iris lens on the camera, pressing the IRIS button will do this automatically
for you by adjusting the lens aperture to keep the white peaks just below
the sensor’s clipping point.
“If you normally expose your shots based on an 18% gray card at 40
IRE video levels, then your log images will look correct when imported
into DaVinci Resolve. However, if you want to maximize your camera
sensor’s signal to noise ratio, you might expose your shots so the white
peaks are just below the sensor clipping point. This may cause your log
images to look overexposed when a video curve is applied to the preview,
and the highlights you thought were safe will look as though they have
been clipped. This is normal and all the details are still retained in the file.
If there is a lot of contrast range in the shot, the log images may look fine
and not overexposed.
“Shooting in RAW/log captures a very wide dynamic range. However,
you might only see the image in the more limited Video range (Rec.709)
when you open the CinemaDNG files in a compatible application. If
the camera is not exposed based on 18% or other video related exposure
guides, the RAW files will look over or under exposed depending on the
dynamic range of the scene. The good news is that you have not really lost
any information in your shots. Based on the contrast range of your shot,
you can creatively adjust the exposure settings of the DNG file for the
look you want using software such as DaVinci Resolve, Adobe Photoshop,
or Adobe Lightroom. To recover the highlights not displayed in Resolve,
use the RAW image settings and adjust the Exposure values so the details
you need fit within the video range. Exposing your shot to the point just
before the sensor clips ensures you are getting the best signal to noise ratio
for the maximum flexibility during postproduction.”
Figure 8.39. (above) The Cavi-Black is an important and useful feature of
the ChromaDuMonde. It is an open hole in the middle of the chart which
is backed by a velvet-lined foldout box on the back of the chart. It’s a
complete light trap and provides a reliable pure black reference in the
frame, an invaluable tool for testing and camera setup.
Figure 8.40. (right, top) An example of the Expose to the Right method
on the histogram. The chart looks blown out and desaturated, but in fact,
nothing is clipping so the values can be brought back in color correction.
The risk is obvious; even a slight increase in exposure will cause some
values to clip.
Figure 8.41. (right, bottom) The same Expose to the Right frame as shown
on the waveform. While the high values are dangerously close to clipping,
all of the darker values have been pushed up the scale, well away from
noise.
09
linear, gamma, log
Figure 9.1. Comparison of RAW data, a log encoded image, and the frame
in Rec.709.
DYNAMIC RANGE
Brightness range, dynamic range, and luminance range are terms for the same
concept: how much variation there is in the luminance levels of a scene
and then how accurately the imaging system is able to reproduce those
levels. Any scene we look at has a specific brightness range, which is a
result of the combination of how much light is falling on the scene and
how reflective the objects are. Also, some things in the scene don’t reflect
light so much as they generate their own light: lamps, a fire, windows, the
sky, etc. The eye can perceive an enormous range of light values: about 20
stops. It does this in two ways: first, the iris opens and closes just like the
aperture in a lens. It also changes its response by switching from photopic
vision (the cones, better when there is lots of light) to scotopic vision (the
rods, which are good in low light situations). But at any one time (bar-
ring changes in the eye) we can see a range of about 6.5 stops (there is not
much scientific agreement on this). How these factors interact to create the
dynamic range of human vision is shown in Figure 9.3.
The difference between the darkest part of a scene and the brightest part
can be enormous, especially in exterior scenes. If there is something dark
and in shadows in the scene and then there are brightly lit clouds, the dif-
ference can easily be as much as 20 stops or more. Twenty stops is a ratio
of 1,000,000:1. As with all scenes, how reflective the objects are is a key
factor in addition to the different amounts of light falling on areas of the
scene—think of a black panther in a cave and a white marble statue in full
sun, in the same frame. Even with the amazing range the human eye is
capable of, there is no way you can really see both at the same time. You
can shield your eyes from the glare of the marble statue and let them adapt
to the dark and see the panther in the mouth of the cave, or you can squint
and let your eyes adapt to the light and see the marble statue, but you’ll
never be able to see both of them at the same time.
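The stop-to-ratio arithmetic behind these figures is simple to verify (each stop is a doubling of the light):

```python
def stops_to_ratio(stops):
    # Each stop doubles the amount of light
    return 2 ** stops

print(stops_to_ratio(20))  # 1048576, i.e. about 1,000,000:1
```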
Film and video have dynamic ranges that are limited by technology.
Color scientist Charles Poynton states, “Dynamic range, according to the
definition used by sensor designers, is the ratio of exposure at sensor satu-
ration down to exposure where noise fills the entire lowest stop. Consider
whether that sort of dynamic range is a useful metric for you.” In other
words, your mileage may vary.
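By that definition, dynamic range in stops is just the base-2 log of the ratio between sensor saturation and the noise floor (a sketch; the electron counts here are illustrative, not from any particular sensor):

```python
import math

def dynamic_range_stops(full_well_electrons, noise_floor_electrons):
    # Ratio of sensor saturation to the exposure where noise dominates,
    # expressed in stops (powers of two)
    return math.log2(full_well_electrons / noise_floor_electrons)

print(round(dynamic_range_stops(60000, 4), 1))  # about 13.9 stops
```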
Until recently video was far more limited than film, but new cameras are
rapidly closing that gap, even exceeding that range. Traditional HD video
(up until the introduction of the Viper, Red One, Genesis, Alexa and now
many others) was limited in range. The problem is illustrated in Figure
9.2—Worst Case Scenario With HD Video. The dynamic range of the scene
may be quite large. However, the camera can only pick up a limited por-
tion of that brightness range (we’re assuming that the camera exposure
was set right in the middle). This means that something that was just very
light gray in the scene will be recorded by the camera as being pure white,
because that brightness value is at the top end of what the camera can
“see.” At the low end, something that is just dark gray in the scene will be
recorded by the camera as being pure black.
The same process is repeated when the recorded images are displayed on
a monitor that has even less range than the camera: image information is
lost at both ends of the scale. In old fashioned analog video, especially in
early TV, the cameras had ridiculously limited dynamic range. As a result
old TV studios were lit with very “flat” lighting: no deep shadows or
even slightly bright highlights were allowed as they would simply turn to
harshly glaring highlights and deep murky shadows. Even though it was
a classic age in terms of the programs, the cinematography was uniformly
bland and dull—it was the only way to deal with the limitations of the
technology. These days, of course, with the newest cameras, there is some
truly outstanding cinematography being done on programs produced for
cable TV. The same problem happened to film when color came in: Tech-
nicolor film had a limited dynamic range and it also required huge amounts
of light due to the chemistry of the emulsion but also because the image
was being split by prisms and sent to three separate pieces of film, resulting
in severe loss of light. There was some outstanding cinematography done
in the Technicolor era but for the most part it was marked by bland, dull
low-contrast lighting, much of it “flat front lighting” which is something
DPs avoid today. This was a huge step down from the black-and-white era,
of course. The same cinematographers who had produced stunning, high
contrast, moody images in black-and-white were now working in color—
it’s not that they didn’t know how to do great lighting, of course they did,
it was just the limitations of the new medium that forced them into it.
Figure 9.2. Because the camera and then the monitor are not always
capable of reproducing the extreme dynamic range of some scenes, the
tonal range in the scene is represented differently in each step of the
process. (The “Worst Case Scenario With HD Video” diagram compares the
dynamic range of the scene, the narrower range the camera can capture,
and the still narrower range a particular monitor can display; tones
outside each range become “black” or “white” or don’t get recorded at
all.) Fortunately there have been huge strides in making cameras that
have amazing dynamic range.
Figure 9.3. Human vision has an extremely wide dynamic range but this
can be a somewhat deceptive measurement. It’s not “all at once”; the eye
adapts by altering the iris (the f/stop) and by chemically adapting from
photopic (normal lighting conditions) to scotopic vision (dark conditions).
The actual instantaneous range is much smaller and moves up and down
the scale.
LINEAR RESPONSE
Figure 9.4. Linear response on a Cartesian diagram: every increment on
the X-axis results in an equal change on the Y-axis.
If an “ideal” film was truly linear, it would have an equal increase in den-
sity for every increase in exposure: doubling the amount of light in the
scene would exactly double the brightness of the final image. The problem
is that linear response means some brightness ranges exceed the limits of
the film (Figure 9.4)—parts of the scene that are too bright just don’t get
recorded: they come out as pure, featureless white on the film (in video,
we call this clipping): no detail, no texture, no separation. The same hap-
pens with the very dark parts of the scene: they are just a featureless dark
blob on the negative. Instead of subtle shadows and gradations of black
tones, it’s just pure black with no texture. Simply put, since the brightness
range of many scenes exceeds what cameras and displays are capable of,
a purely linear response puts some areas of the scene off the scale at the
extremes. Even with the most advanced new cameras, this will always be
a problem in cinematography; not even the human eye can accommodate
the brightness range of the real world without adaptation.
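The effect of a purely linear capture can be mimicked with a hard clamp (a minimal sketch; the scene values are hypothetical linear luminances with 1.0 as the clip point):

```python
def capture_linear(value, lo=0.0, hi=1.0):
    # Anything beyond the limits is lost outright: no detail, no separation
    return min(max(value, lo), hi)

scene = [0.001, 0.02, 0.5, 1.7, 4.0]
print([capture_linear(v) for v in scene])  # the two brightest values both clip to 1.0
```

Note that 1.7 and 4.0 record identically; once a tone is off the scale, the difference between “bright” and “much brighter” is gone for good.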
AN IDEAL AND A PROBLEM
You may have heard that video cameras are “linear,” meaning that they
have that one-to-one correspondence between the input (scene brightness
values) and the output (pixel brightness values). It just means that there is
no alteration of the data in the transition.
At first glance this seems ideal—for every change of brightness levels
within the scene, there is a corresponding change in the output levels from
the photosites. Sounds great—after all, accurate reproduction of the scene
is what we’re after, right? If only life were so simple, everything would be
easy.
Because the brightness range of the real world is often huge, no cur-
rent sensor, monitor, or projector can accommodate that great a bright-
ness range. The human eye can accommodate a ratio of 100:1 under static
conditions (without adaptation with the iris or changing chemically from
scotopic to photopic or vice versa). So we have certain hard limits to what
brightness range we can record and use—as cameras and displays evolve,
in the end it will not be the equipment that is the limiting factor, it will
be human vision. In regard to a sensor, we call that upper limit clipping, as
we talked about previously; camera manufacturers call it full well capacity or
sensor saturation. Full well capacity simply means that each photosite (well)
has absorbed as many photons as it can take. Our starting point is pure
black, 0 IRE, and the dynamic range of the sensor is measured between
black and the upper limit of clipping. With the best new cameras, this
can be quite an impressive range of up to 14, 15 stops or more, but even
so, many scenes, especially exteriors with bright sunlight and deep shade,
exceed this dynamic range. This means that there are many instances
where a linear recording of a scene exceeds the dynamic range that the
sensor/recording system (or indeed, the eye) is capable of.
Figure: “The S-Curve Effect”—compared to a straight linear response, the
highlight values of the scene roll off gradually, and the brightness ranges
beyond each end are not recorded (axes: scene brightness vs. image
brightness).
Figure 9.7. An illustration of how the S-curve “saves” the extreme shadows
and highlights; the diagram marks the bright parts of the scene that are
“saved.”
Figure 9.11. Gamma correction in traditional video—the gamma curves of
the camera and the monitor are the inverse of each other and thus cancel
out.
THE COINCIDENCE
Human vision perceives brightness in a non-linear fashion; it works out to
a gamma of about .42, which is, by amazing coincidence, the inverse of 2.4
(1/2.4=.42). As a result of this, engineers realized very early in the develop-
ment of television that cameras must include something at the front end to
compensate for this; it is called gamma correction (Figure 9.11). Recall that in
the first decades of television, images went directly from the camera to the
control room, to broadcast towers, to people’s televisions. In short, gamma
encoding at the camera is the inverse of the gamma characteristics of CRT
monitors—the two cancel each other out. Modern flat panel displays such
as LED, plasma, LCD, and OLED don’t have this non-linear nature, and CRTs
are no longer even being made, so it would seem that this gamma correc-
tion is no longer needed. However, there are decades’ worth of gamma
corrected RGB video already in existence, so even the newest displays still
incorporate this correction.
While CRTs came by their gamma naturally due to physical factors, flat
screen displays need a Look Up Table (LUT) to achieve the proper gamma
correction (see Image Control for more on LUTs). There used to be quite
a bit of variation in gamma correction, but there has been a tendency to
standardize on 2.2 for monitors.
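The cancellation can be shown in a few lines (a sketch using the 2.4 and .42 figures from above):

```python
def gamma_encode(linear, gamma=2.4):
    # Camera-side correction: roughly the .42 exponent described above
    return linear ** (1.0 / gamma)

def display_response(encoded, gamma=2.4):
    # Monitor-side gamma: the inverse curve
    return encoded ** gamma

v = 0.18
assert abs(display_response(gamma_encode(v)) - v) < 1e-12  # the two cancel out
```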
REC.709 AND REC. 2020
The term Rec.709 (or BT.709) appears in discussions of HD video all the
time. It comes up in several different places in this book because it is not
just one thing: it’s actually a collection of specifications that define tradi-
tional High Definition video up to 1080. Basically modern video as we
know it now came along with cameras like the Viper, Red, Genesis, Alexa
and all the others that created Ultra High Def. The Rec.709 specifications
include a color space, a gamma curve, aspect ratios, frame rates, and many
engineering details we don’t need to get into here.
The official name is ITU-R Recommendation BT.709; most often it is
referred to as Rec.709, but you’ll also see it written Rec 709, Rec709, and
Rec. 709. The ITU is the International Telecommunication Union, which is sort
of the United Nations’ version of SMPTE (Society of Motion Picture & Tele-
vision Engineers) in the US, EBU (European Broadcast Union) or ARIB (Associa-
tion of Radio Industries and Businesses) in Japan, all of which are organizations
that create standards so that everybody’s video signal plays nice with all
the rest. It has been officially adopted as a standard so technically it is no
longer a “rec,” but it is unlikely that the terminology in general use will
change anytime soon.
170..cinematography:.theory.and.practice.
Figure 9.12. The Rec 709 transfer function is mostly a power function curve, but there is a small linear portion at the bottom.
Don't panic! It's not as complicated as it seems. All it means is that the response below 0.018 is linear: a straight line of slope 4.5. Above 0.018, the response is a power function with the exponent 0.45. By the way, in case you were wondering, the 0.018 has nothing to do with 18% gray. Technically, Rec.709 has only a five stop range but tweaks such as knee and black stretch can extend this a bit. Most HD cameras can generate signals that are "legal" meaning that the video signal levels stay within the range 0% to 100%. However, each camera company adds their own little twists to try to conform to Rec.709 and they don't always produce the same result; you will hear the term "Rec.709 compliant" meaning "kinda, sorta." In his article HDTV Standards: Looking Under The Hood of Rec.709, Andy Shipsides writes "In other words, monitors conform to the gamma standards
of Rec.709, but cameras generally do not. This is a big reason why setting two cameras to Rec.709 mode doesn't guarantee they will look the same. Different gamma means different contrast and dynamic range. Contrast is just half the equation, of course; the other half is color." We'll discuss this aspect of Rec.709 in Digital Color. The bottom line is that in the newer cameras that shoot RAW video, viewing the images in Rec.709 is just an approximation of the actual image. This does not mean that Rec.709 video no longer has a place in shooting. It is often used when video has to be output quickly for near immediate use with little or no grading. One might ask, what is the linear portion at the bottom for? Poynton explains it this way, "A true power function requires infinite gain near black, which would introduce a large amount of noise in the dark regions of the image."
By infinite gain, he means that the power function curve becomes vertical near the origin.

Figure 9.13. Output options on the Arri Alexa. Legal keeps everything in the range of 0-100%. Extended ranges from -9% to 109%. Raw, of course, outputs everything.

Some cameras have a Rec.709 output either for viewing or for recording when little or no post-production color correction is anticipated. These are usually not "true" Rec.709 but are designed to look reasonably good
on a Rec.709 display. Arri puts it this way: "Material recorded in Rec.709 has a display specific encoding or, in other words, 'what you see is what you get' characteristics. The purpose of a display specific encoding is to immediately provide a visually correct representation of the camera material, when it is screened on a certain display device. This is achieved by mapping the actual contrast range of the scene into the contrast range that a display device can reproduce." Arri also offers their Rec.709 Low Contrast Curve (LCC), which they explain like this, "To enable productions to shoot in Rec.709 color space without the sacrifice of too much highlight information, Arri provides a special Low Contrast Characteristic (LCC) Arri Look File that can be applied to change the standard Rec.709 output."
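The piecewise Rec.709 transfer function described above (linear below 0.018, a 0.45 power function above it) can be sketched in a few lines. The function name is mine; the 1.099 and 0.099 constants come from the BT.709 specification and make the two segments join smoothly.

```python
def rec709_oetf(linear):
    """Map linear scene light (0.0-1.0) to a Rec.709 video signal."""
    if linear < 0.018:
        return 4.5 * linear                   # linear segment near black
    return 1.099 * linear ** 0.45 - 0.099     # power function segment

print(round(rec709_oetf(0.0), 3))    # → 0.0
print(round(rec709_oetf(0.18), 3))   # middle gray lands at about 0.409
print(round(rec709_oetf(1.0), 3))    # → 1.0
```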
REC. 2020
With UHD video, a new standard has come into play—Rec. 2020. It defines various aspects of ultra-high-definition (UHD) with standard dynamic range and wide color gamut, including picture resolutions, frame rates with progressive scan, bit depths, and color primaries.
Rec. 2020 defines two resolutions: 3840×2160 (4K) and 7680×4320 (8K). These resolutions have an aspect ratio of 16:9 and use square pixels. Rec. 2020 specifies the frame rates 120p, 119.88p, 100p, 60p, 59.94p, 50p, 30p, 29.97p, 25p, 24p, and 23.976p. It defines a bit depth of either 10 bits per sample or 12 bits per sample.
STUDIO SWING LEVELS, FULL RANGE AND LEGAL VIDEO
Rec.709 also incorporates what are called legal levels, also known as studio swing or video range. Range or swing in this context really means excursion, as in how the signal level travels between reference black and reference white. In 8-bit video, the minimum code value is 0 and the maximum is 255. You would think that 0 represents pure black and 255 represents pure white. In legal levels, video range or studio-swing, however, black is placed at code value 16 and reference white is at 235 (64-940 in 10 bit). Code values from 0-16 and from 236-255 are reserved as footroom and headroom. When this system is not used and the video signal uses all code values from 0-255 (0-1023 in 10 bit) it is called full swing or extended range. Arri's version of Extended goes from -9% (IRE) up to 109% (IRE). We'll be coming back to the concept of headroom and footroom soon.
An example of this general idea can be seen on the menu of an Arri Alexa (Figure 9.13). It offers three options for Output Range: Legal, Extended, and RAW. The Legal setting will output video at 0-100% and Extended goes from -9% up to 109%. Extended can, however, push your video into illegal ranges on the waveform. As another example, DaVinci Resolve has a LUT called Legal Output Levels.
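As a sketch of the arithmetic involved (my own illustration, not any camera's or Resolve's actual implementation), converting between full range and 8-bit legal/studio swing levels looks like this:

```python
BLACK, WHITE = 16, 235   # 8-bit legal range (64-940 in 10 bit)

def full_to_legal(cv):
    """Map a full-range 8-bit code value (0-255) into 16-235."""
    return round(BLACK + (cv / 255) * (WHITE - BLACK))

def legal_to_full(cv):
    """Map a legal-range code value back out to 0-255."""
    return round((cv - BLACK) / (WHITE - BLACK) * 255)

print(full_to_legal(0), full_to_legal(255))    # → 16 235
print(legal_to_full(16), legal_to_full(235))   # → 0 255
```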
THE CODE 100 PROBLEM
There is another consequence of the non-linear nature of human vision. It is called the Code 100 problem and we will find that it has huge implications for digital video. Scientists who study perception rely on measurements of the Just Noticeable Difference (JND)—the smallest change in input levels that an average person can detect. These perceptual studies are based on averages of many observers. Most human modes of perception are logarithmic or exponential in nature: our ability to sense changes in perception changes as they become more extreme (Figure 9.14).
This applies not only to brightness levels but also to sound level, pressure, heaviness, pain and others. Let's take a simple example: weight. Anyone can sense the difference between a one pound weight and a two pound weight. On the other hand, not even the guy who guesses your weight at the state fair can detect the difference between a 100 pound weight and a 101 pound weight. In both examples, the difference is the same: one pound, but the percentage change is vastly different: from one to two pounds, the difference is 100%. The difference between 100 and 101 is only 1%.
Whether our perception of brightness is logarithmic or exponential in response is a matter of debate among vision scientists. In practice, it doesn't make much difference—our perception of light levels is not linear.

Figure 9.14. The Code 100 problem in digital video in an 8-bit system. Because human perception of the Just Noticeable Difference in lightness is a ratio (percentage) rather than an absolute value, the number of digital code values needed to efficiently portray the gray scale is not uniform as we move up and down the brightness scale.

THE ONE PER CENT SOLUTION
The human visual system can perceive a one per cent difference in brightness value; this is the Just Noticeable Difference. Let's say we're dealing with 8-bit material, so the code values go from 0-to-255 (256 code values total). Figure 9.14 shows the problem—in the darkest part of the image (lower code values) the difference between, for example, CV (code value) 20 and 21 is 5%—far more than the minimum discernible difference. The result of this is that there are very big jumps in brightness from one code value to the next. This leads to banding (which is also known as contouring) in the shadows (Figure 2.7 in Digital Image).
At the brighter end of the scale, the differences can be much smaller, such as only a 0.5% difference between CV 200 and 201—much smaller than is perceptible. This means that a lot of these code values are wasted. If the visual difference between two code values is not perceivable, then one of them is unnecessary and it is a waste of space in the file and in the data storage, which becomes a real issue when shooting RAW, and especially in 3D, high speed, multi-camera or other situations where a great deal of video is recorded.
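The two code value examples above can be checked with one line of arithmetic; this little sketch (mine, not from the book) prints the percentage step between adjacent 8-bit code values:

```python
def step_percent(cv):
    """Relative brightness change from code value cv to cv + 1, in percent."""
    return 100.0 / cv

print(round(step_percent(20), 1))    # → 5.0, a visible jump: banding
print(round(step_percent(200), 1))   # → 0.5, imperceptible: wasted codes
```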
There are two ways to deal with this issue. One of them is to have lots of bits at the sensor/processor level; this is how DSLRs deal with it. Another way is to have the spaces between the levels be unequal—putting the light values/code values at the bottom end closer together and making them farther apart at the top end; as we will see, the steps are made unequal in spacing but are still perceptually equal, meaning that the eye still sees them as equal. This can be accomplished either through gamma encoding (power function) or through log encoding, which we will discuss in more detail in the next section as it has become a major factor in digital video. To help us prepare for the discussion of these techniques of gamma and log encoding, let's take a look at the traditional HD controls, which are in fact largely the same as have been used on video cameras for many decades.
HYPERGAMMA/CINEGAMMA/FILM REC
Camera manufacturers have developed several versions of gamma encoding variously called Hypergamma, Cinegamma, Video Rec, Film Rec or low contrast curve (depending on the camera), which are designed to extend the dynamic range of the camera. These gamma curves are usually measured in a percentage, with the range of Rec. 709 as a base 100%, with typical settings of 200%, 300%, 400% and so on. The higher the dynamic range, the flatter the curve and the lower the contrast, which means that color
correction will be necessary and it also makes the use of exposure aids such as zebras difficult or impossible. According to Sony, "HyperGamma is a set of new transfer functions specifically designed to maximise the latitude of the camera, especially in highlights. It works by applying a parabolic shaped curve to the gamma correction circuit, so that the huge dynamic range of Sony CCDs can be used in the final recorded picture, without the need for the camera operator to adjust any settings in the camera."
This approach also means we do not use any knee, thus removing the non linear part of the transfer characteristic, as HyperGamma is a totally smooth curve. This means you remove any traditional issues which occur because of the non-linearity, especially in skin tones, and improve the dynamic range in one step. "On lower end Sony cameras there are four HyperGamma curves as standard, two of which are optimized for 100% white clip in a TV workflow (HG 1 and 2), and two for the 109% white clip generally used in traditional film style workflow (HG 3 and 4). Having chosen your white clip point, two curves are available to either optimize for maximum highlight handling (HG 2 and 4) or for low light conditions (HG 1 and 3)." (Sony, Digital Cinematography With Hypergamma). Higher end cameras have as many as eight HyperGamma curves (Figure 9.16). They are different than knee adjustments in that the changes to the curve start down toward the middle, so they generate a more "holistic" compression of the highlights as opposed to the somewhat unnatural look of knee and black stretch.
Figure 9.15. (above) A linear encoded grayscale distributes the tones unevenly (left). Log encoding distributes the tones more evenly (right). The gray scale steps in this case represent the number of code values in each stop.

SONY HYPERGAMMA TERMINOLOGY
Sony now uses a naming format for hypergamma that includes the range. For example, HG8009G30 has a dynamic range of 800%, a middle gray exposure of 30% and a white clip level of 109%: HG (HyperGamma), 800 (dynamic range), [10]9 (white clip level) and G30 (middle gray exposure level at 30%).
Sony HG4609G33 has an extended dynamic range of 460%, a white clip of 109%, and a middle grey exposure level of 33%. This means that the name of the HyperGamma actually includes Sony's recommendation for exposure: they want you to expose your middle gray at a particular IRE value (in this case 33 IRE), which will then give you the indicated percentage of dynamic range.
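The naming scheme can be unpacked mechanically. This parser is purely my own illustration of the convention described above (it is not a Sony tool), and it assumes every name follows the HG + three digits + one digit + G + two digits pattern:

```python
import re

def parse_hypergamma(name):
    """Decode a Sony HyperGamma name such as HG8009G30."""
    m = re.fullmatch(r"HG(\d{3})(\d)G(\d{2})", name)
    if not m:
        raise ValueError("not a HyperGamma name: " + name)
    return {
        "dynamic_range_pct": int(m.group(1)),     # 800 -> 800%
        "white_clip_pct": 100 + int(m.group(2)),  # 9 -> 109%, 0 -> 100%
        "middle_gray_ire": int(m.group(3)),       # G30 -> expose gray at 30 IRE
    }

print(parse_hypergamma("HG8009G30"))
# → {'dynamic_range_pct': 800, 'white_clip_pct': 109, 'middle_gray_ire': 30}
```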
GAMMA IN RAW VIDEO
When shooting RAW, gamma is just metadata; you aren't really changing the image at all until you bake it in, which you do have to do at some point. An example of how this is done is how Red cameras handle this. Their color science has been evolving since the introduction of the first Red One and offers several selections: RedGamma 2, RedGamma 3 and a log version: REDlogFilm. These are different look profiles for viewing on set and for conversion as the RAW files are imported to editing or color software; as with all metadata, you're not stuck with them, but they do offer a starting point for the final look and guidance for the postproduction process.
THE INEFFICIENCY OF LINEAR
In addition to potentially losing data at the top and bottom of the curve, linear video reproduction has another problem—it is extremely inefficient in how it uses bits per stop. Table 9.1 shows how Art Adams calculates the output of a 14 bit sensor, which has 16,384 code values per channel. It shows the code values in terms of f/stops. Recall that every f/stop is a doubling of the previous value (or half the previous value if you're going down the scale)—the inefficiency is obvious in the table and the problem is clear—the four stops at the top end of the scale (the highlights) use up 15,356 code values, most of the available values!
As Adams puts it in his article The Not-So-Technical Guide to S-Log and Log Gamma Curves (Pro Video Coalition), "As you can see, the first four stops of dynamic range get an enormous number of storage bits—and that's just
Figure 9.16. (left, top) Sony's HyperGamma curves as compared to video with the Rec 709 curve applied, which here they call standard—the yellow line. This chart uses Sony's older method of naming gamma curves; see the text for their new system. (Courtesy of Sony)
Figure 9.17. (left, below) Panasonic's Film Rec curves. They are denoted as the percentage by which they extend the dynamic range.
about when we hit middle gray. This is the origin of the 'expose to the right' school of thought for 'RAW' cameras: if you expose to the right of the histogram you are cramming as much information as possible into those upper few stops that contain the most steps of brightness. As we get toward the bottom of the dynamic range there are fewer steps to record each change in brightness, and we're also a lot closer to the noise floor."
There's another problem. Experiments have shown that around 60 to 70 code values per stop is ideal. In this example, many stops at the bottom have less than this and stops at the top have much more.
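The arithmetic behind Table 9.1 can be reproduced in a few lines. This is my own sketch of Adams' calculation; the bottom rows of the printed table trim the lowest stops a little differently, but the top rows and the 15,356 total match:

```python
total = 2 ** 14               # a 14-bit sensor: 16,384 code values
top = total                   # exclusive upper bound of the current stop
counts = []
for stop in range(1, 15):
    low = top // 2            # each stop down halves the code values
    counts.append(top - 1 - low)   # values within this stop, as in Table 9.1
    print(f"{stop:2d} stop(s) down: CV {low}-{top - 1} ({counts[-1]} values)")
    top = low

print(sum(counts[:4]))        # → 15356: the top four stops hog the values
```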
Figure 9.18 is a diagram devised by Steve Shaw of Light Illusion which illustrates this problem. The top part of the figure shows that not only are there an excessive number of bits used up in the highlights, but also the divisions are too small for the human eye to perceive and so are wasted.
LOG ENCODING
Fortunately, there is a solution to this inefficiency: log encoding (Table 9.1). It is similar in concept to gamma in that it reduces the slope of the response curve in order to extend the dynamic range, to stretch the brightness values that can be captured and recorded without clipping. The difference between the two is self-evident: instead of applying a power function to the curve, it uses a logarithmic curve.
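A toy example (mine, not any camera's actual curve) shows the core idea: encoding with a base-2 logarithm gives every stop, i.e. every doubling of scene brightness, an equal share of the output range.

```python
import math

def log_encode(linear, stops=14):
    """Map linear light (0-1) so each stop gets an equal output slice."""
    floor = 2.0 ** -stops                  # darkest value we bother to encode
    return (math.log2(max(linear, floor)) + stops) / stops

# Each halving of scene brightness steps the signal down by the same 1/14:
for light in (1.0, 0.5, 0.25, 0.125):
    print(round(log_encode(light), 3))     # → 1.0, 0.929, 0.857, 0.786
```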
Table 9.1. (right) Based on a 14-bit sensor, this chart shows that the first few stops down from maximum white hog most of the available code values, while the darkest parts of the image contain so few code values that they are not able to accurately depict the subtle gradations of tone—resulting in banding in the image. Experiments have shown that around 60 to 70 code values are needed per stop for proper representation of the image. Derived from calculations made by Art Adams.

Value                  Code Range      Total # Values
Max White                              16,384
One Stop Down          8,192-16,383    8,191
Two Stops Down         4,096-8,191     4,095
Three Stops Down       2,048-4,095     2,047
Four Stops Down        1,024-2,047     1,023
Five Stops Down        512-1,023       511
Six Stops Down         256-511         255
Seven Stops Down       128-255         127
Eight Stops Down       64-127          63
Nine Stops Down        32-63           31
Ten Stops Down         16-31           15
Eleven Stops Down      9-15            6
Twelve Stops Down      5-8             3
Thirteen Stops Down    3-4             1
Fourteen Stops Down    1-2             1
A key element of log scales is that the spacing of the values is not even, as we see here along the vertical Y-axis. A lot of problems are solved with this simple mathematical translation. Log curves and power function curves look somewhat alike when graphed and they do operate in similar ways, but there are mathematical and "behavioral" differences (such as shown in Figure 9.3) which can be used for specific purposes.
BRIEF HISTORY OF LOG
So where does this idea of log encoding for video come from? Its origin was a Kodak project in the 1990s, a system for converting film to video. Although it was a complete system with a scanner, workstations and a laser film recorder, it was the file format that has had a lasting influence on image production. The system and the file format were called Cineon. Kodak's engineering team decided that a film image could be entirely captured in a 10 bit log file. It was intended as a digital intermediate (DI) format, not one to be used for delivery, CGI or anything else. As it was about origination on film and ultimately for printing on film, the entire system was referenced to film density numbers—which is the key value needed to understand and transform film negative. We'll talk about Cineon in more detail in DIT & Workflow.
SUPERWHITE
Film and video have a crucial difference. In 8-bit video computer graphics, "pure white" is all channels at maximum: 255, 255, 255, but in the film world, what we might call pure white is just a representation of "diffuse white," the brightness of a piece of illuminated white paper (about 90% reflectance). Because of the shoulder of the S-curve, film is actually capable of representing many values of white much brighter than this. These are specular highlights, such as a light bulb, a candle flame or the hot reflection of sun off a car bumper, for example.
If we stuck with 0-to-255, then all the "normal" tones would have to be pushed way down the scale to make room for these highlights that are above diffuse white. Kodak engineers decided on a 10-bit system (code values from 0-to-1023) and they placed diffuse white at 685 and black at 95—just as legal video goes from 16-235 (8-bit), 64-940 (10-bit) or -9% to 109%. The values above reference white are thus allowed for and are called superwhite. Similarly, there are code value levels below Reference Black that allow some footroom in the digital signal.
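A quick sketch of the Cineon convention just described (the function is my own illustration): normalizing so diffuse white equals 1.0 shows how much superwhite headroom the 10-bit range leaves.

```python
CINEON_BLACK, CINEON_WHITE = 95, 685   # reference black and diffuse white

def cineon_to_relative(cv):
    """Normalize a Cineon 10-bit code value so diffuse white = 1.0."""
    return (cv - CINEON_BLACK) / (CINEON_WHITE - CINEON_BLACK)

print(cineon_to_relative(95))              # → 0.0, reference black
print(cineon_to_relative(685))             # → 1.0, diffuse white
print(round(cineon_to_relative(1023), 2))  # → 1.57, superwhite headroom
```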
WHAT YOU SEE IS NOT WHAT YOU GET

Figure 9.18. Linear sampling wastes code values where they do little good (upper diagram), while log encoding (lower diagram) distributes them more evenly—in a way that more closely conforms to how human perception works. Based on a diagram by Steve Shaw of Light Illusion.

One important aspect of log encoded video is that it is not an accurate representation of the scene when viewed on a monitor—this is both its strong point and a limitation. Since it is essentially compressing scene values in order to get them to fit into the recording file structure, a log encoded image on a monitor will look washed out, pale and low contrast (Figure 9.23). While most cinematographers can adapt to viewing this, it tends to freak out directors and producers and may not be especially useful for other crew members as well. It can also make lighting decisions more difficult and affect other departments such as set design, wardrobe and makeup. The fact that log encoded images are not really "ready for prime time" means that the image must be color corrected at some later time, certainly at the time of final grading (Figure 9.24). In the meantime, it is often useful to have some sort of temporary correction for viewing purposes.
There are good reasons to have an accurate view on the set monitors—or at least one that is close to how the scene will appear. This usually comes in the form of a LUT (Look Up Table), which we'll talk about in Image Control. Rec.709 is WYSIWYG because it is display-referred, meaning it is set up to be compatible with most monitors and projectors; for this reason, a quick, non-permanent conversion to Rec.709 is often output for use on the set. Some cameras have a Rec.709 monitor viewing output just for this purpose—it has no effect on the recorded files. There are other viewing modes besides Rec.709, some of them custom built for the project. It is also important to remember that when viewing the scene in log mode, you may
not be able to make accurate exposure decisions, particularly with tools such as zebras or false color. There are different ways to deal with this, as was discussed in the chapter Exposure.

Figure 9.19. Gamma curves and log curves are different mathematically but are somewhat similar in their effect on video levels in the recorded image. Both, however, are dramatically different from a purely linear response. Remember that gamma curves created by camera manufacturers are rarely a simple exponent function; most of them have a secret sauce, in some cases, very secret.

LOG AND RAW—TWO DIFFERENT THINGS
Log and RAW are two different things; however, many cameras that shoot RAW actually record the data log encoded—in most high end cameras,
the sensors produce more data (higher bit depth) than is feasible to record using current technology. Just keep in mind that you can record RAW that isn't log or log that isn't RAW, but for the most part, they go hand-in-hand.
Some cameras do have the ability to record RAW uncompressed; however this can be likened to drinking from a fire hose—while there may be important reasons for recording uncompressed RAW data, it is important to understand the implications of the torrents of data that will be generated. As with a fire hose—you better be sure you're really that thirsty. Other recording options include Rec.709 or P3, which is a wide gamut color space that is part of the DCI (Digital Cinema Initiatives) and so is an industry wide standard.
Figure 9.20. (above) How log encoding provides extra dynamic range.
Figure 9.21. (left) The "correctly exposed" image values occupy only a part of the entire range of Cineon code values. Reference Black is placed at CV 95 and Reference White is at CV 685, thus allowing headroom for superwhite and footroom for sub-black values.
Figure 9.22. Rec.709 and Cineon. By placing reference white at a lower code value, extra headroom is allowed for highlights and specular reflections. The same is true at the bottom, where code value 64 is taken as pure black, allowing for some footroom below that value.
problem by storing shadow values farther down the curve so that the darkest tone the camera can possibly see is mapped very close to the log curve's minimum recordable value. By making blacks black again, the log image actually looks fairly normal on a Rec.709 monitor. The highlights are still crushed because they are heavily compressed by the log curve for grading, but the image otherwise looks 'real.' It's less likely to cause people to panic when walking by the DIT's monitor." Sony says "S-Log3 is a log signal with 1300% dynamic range, close to the Cineon Log curve. This does not replace S-Log2 and S-Gamut3. It is added as another choice."

Figure 9.23. (opposite page, top) A Red camera shot displayed in RedLogFilm—a log space. The image is dull and low contrast by design. Note the waveforms in upper center: nothing reaches 0% at the bottom or 100% at the top; they don't even come close. This also shows in the histogram in the upper left. As shown here, the log image is not really viewable insofar as it is not an accurate representation of the scene—it is not intended to be.

Figure 9.24. (opposite page, bottom) The same shot with RedColor and with the RedGamma3 LUT applied. This LUT is Rec 709 compatible, meaning it is designed to look good on a Rec 709 display. Notice especially the difference in the parade waveforms. This is a problem as it means that the log image is not only dull and low-contrast but also that the waveform monitor and vectorscope are no longer accurate as relates to the actual scene values.

As you will see from the following brief descriptions of various camera companies' approaches to log, they start from different assumptions about what is important in imaging and how best to achieve that. It is important to understand these differences and how best to use them on the set—it is not unlike testing, evaluating and using different film stocks. Table 9.2 shows the IRE and code values of S-Log1, S-Log2, and S-Log3. Keep in mind that in HD/UHD, IRE and % on the waveform are the same measurement, although % is the technically correct designation.

Table 9.2. (right) Black, middle gray, and 90% white figures for S-Log1 and S-Log2.

          0% Black          18% Gray          90% White
          IRE   10-bit CV   IRE   10-bit CV   IRE   10-bit CV
S-Log1    3%    90          38%   394         65%   636
S-Log2    3%    90          32%   347         59%   582
Figure 9.25. (top) A frame displayed in RedLogFilm—a log space. It is not really viewable insofar as it is not an accurate representation of the scene—it is not intended to be.
Figure 9.26. (above) The same shot with RedColor and with the RedGamma3 LUT applied in RedCine-X Pro.

Table 9.3. (left) Sony S-Log3.

          0% Black          18% Gray          90% White
          IRE   10-bit CV   IRE   10-bit CV   IRE   10-bit CV
S-Log3    3.5%  95          41%   420         61%   598

SONY S-GAMUT
S-Gamut is a color space designed by Sony, a leader in high-def cameras from the very beginning, specifically to be used with S-Log, which is their version of logarithmic recording. Since for a number of technical reasons an S-Gamut conversion to Rec.709 can be a bit tricky, Sony has come up
Figure 9.27. Sony's curve for S-Log as compared to Rec 709. (Courtesy of Sony)

[Figure: Canon Log compared to a Normal 1 curve, plotted as output vs. input.]
CANON-LOG
Commonly referred to as C-Log (Canon-Log), it behaves in slightly different ways depending on the ISO setting (Figure 9.30). "As a result of its operation, exposure latitude/dynamic range can be extended up to a maximum of 800% by raising the camera's Master Gain setting. Canon states that this will yield a still acceptable noise level of 54 dB." (Larry Thorpe, Canon-Log Transfer Characteristics.) As previously mentioned, Canon takes a different approach to its RAW encoding in that it is partially baked in, meaning the color channels have gain added to them in order to achieve the correct color balance for the scene and then this is recorded as baked in data.
PANALOG
Panavision's chief digital technologist John Galt, in the white paper Panalog Explained, summarizes: "Panalog is a perceptually uniform transfer characteristic that internally transforms the 14-bit per color linear output of the Genesis A/D converters into a quasi-logarithmic 10-bit per color signal that enables the RGB camera signal to be recorded on 10-bit recorders."
REDCODE
Red cameras record using the Redcode RAW codec, which has the file extension .R3D and is, as the name states, a RAW format. It is a variable bit rate lossy (but visually lossless) wavelet codec with compression ratios selectable from 3:1 to 18:1.
As a wrapper format, it is similar to Adobe's CinemaDNG. Because it is a wavelet codec, as are CineForm RAW and JPEG 2000, the artifacts which may be the result of lossy compression are not like the "blocking" that occurs with heavy JPEG compression.
RedCode RAW stores each of the sensor's color channels separately prior to conversion to a full color image. This brings advantages in control over color balance, exposure and grading in post-production. As with other RAW formats, these adjustments are stored as metadata until they are baked in at some point; this means that adjustments are non-destructive.
RED LOG
Like all camera manufacturers, Red has constantly sought to improve their color space, log and gamma, and they have gone through several generations of their software/firmware for Red cameras. This is from the company: "REDlog is a log encoding that maps the original 12-bit R3D camera
Table 9.4. Code and waveform values for Canon C-Log.

Image Brightness                     8-bit CV   10-bit CV   Waveform
0%—Pure Black                        32         128         7.3%
2% Black                             42         169         12%
18% Middle Gray                      88         351         39%
90%—2.25 Stops Over Middle Gray      153        614         63%
Maximum Brightness                   254        1016        108.67%
data to a 10-bit curve. The blacks and midtones in the lowest 8 bits of the video signal maintain the same precision as in the original 12-bit data, while the highlights in the highest 4 bits are compressed. While reducing the precision of highlight detail, the trade-off is that there's an abundance of precision throughout the rest of the signal which helps maintain maximum latitude.
REDlogFilm is a log encoding that's designed to remap the original 12-bit camera data to the standard Cineon curve. This setting produces very flat-contrast image data that preserves image detail with a wide latitude for adjustment, and is compatible with log workflows intended for film out."
18% GRAY IN LOG
If you ever studied still photography, you learned that 18% gray (middle gray) is halfway between black and white and that light meters are calibrated to 18% gray as being the average exposure of a typical scene. Now why is middle gray 18% and not 50%? Because human perceptual vision is logarithmic, not linear. The Kodak 18% gray card is probably one of the most widely used and trusted tools in photography and film shooting. So we can depend on the gray card and light meters, and expect that 18% gray will read as exactly 50% on the waveform monitor, right?
Not exactly, except for one part: 18% reflectance (actually it's 17.5%) is middle gray perceptually, but it turns out that incident meters are calibrated according to ANSI (American National Standards Institute) at 12.5% (with some small variations between different manufacturers). Reflectance (spot) meters are calibrated at around 17.6%, again with variation among manufacturers. Even Kodak acknowledges this in the instructions packaged with the Kodak Gray Card: "Meter readings of the gray card should be adjusted as follows: 1) For subjects of normal reflectance increase the indicated exposure by 1/2 stop." The instructions have since been updated to: "Place the card close to and in front of the subject, aimed halfway between the main light and the camera." Due to the cosine effect, holding the card at an angle has the effect of reducing the reflectance of the card, which in effect forces the meter to increase the exposure by roughly 1/2 stop. (For more on the cosine effect in lighting, see Motion Picture and Video Lighting by the same author.)
How middle gray reads on the waveform monitor is a more complex
topic, but an important one to understand. Despite some lack of precision,
middle gray/18% is still a very important part of judging exposure and
testing cameras, and gray cards in particular are widely used and trusted.
In Rec.709, the transfer function places middle gray at 40.9% (generally
rounded off to 41%). It assumes a theoretical 100% reflectance for
"white," which is placed at 100 IRE (100% on the waveform monitor),
but as we know that isn't possible in the real world, where 90% reflectance
is roughly the high end. If you put a 90% reflectance target (such as a Kodak
white card) at 100% on the WFM, 18% gray will wind up at 43.4%; OK,
call it 43%. Are Red's values higher? Are Alexa's lower? Neither Red's
709 nor Alexa's 709 curves precisely follow the Rec.709 transfer function.
In neither case can we assume that 18% gray hits 41% when we set the
camera up to its nominally correct exposure level. The same is likely to be
true for any camera with a non-Rec.709 curve.
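Those two landing points, 40.9% and 43.4%, can be checked directly against the published Rec.709 transfer function. A minimal sketch in Python (the constants are from the Rec.709 specification; the function name is mine):

```python
# Where does 18% gray land under the Rec.709 transfer function (OETF)?
# A quick sanity check of the 40.9% and 43.4% figures quoted in the text.

def rec709_oetf(L):
    """Rec.709 opto-electronic transfer function.

    L is scene-linear light, where 1.0 is the reference white level.
    Returns the encoded signal on a 0.0-1.0 scale (x100 = waveform %).
    """
    if L < 0.018:
        return 4.5 * L                     # linear segment near black
    return 1.099 * L ** 0.45 - 0.099       # power-law segment

# With a theoretical 100% reflectance at reference white,
# 18% gray encodes to about 40.9% on the waveform monitor.
print(round(rec709_oetf(0.18) * 100, 1))        # -> 40.9

# If instead a 90% reflectance white card is placed at 100%,
# 18% gray sits at 0.18 / 0.9 = 0.2 of reference white: about 43.4%.
print(round(rec709_oetf(0.18 / 0.9) * 100, 1))  # -> 43.4
```

The same function makes it easy to check where any reflectance value should sit on a strict Rec.709 waveform.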
linear, gamma, log · 185
Figure 9.31. Relative values for 90% diffuse white and middle gray of Rec.709,
Cineon, S-Log, LogC, S-Log 3, and S-Log 2. The values for where 90% diffuse white
(the red dotted line) is placed change, as do the values for 18% middle
gray. Values are shown in IRE and Code Values (CV).
In both the Red and the Alexa cases the recommended values are those the
manufacturer says will render the optimum exposure for the way they've
set up their cameras to work. Red's FLUT adds considerable S-curving,
both in shoulder and toe in film terms, and does interesting and seemingly
appropriate things in the midtones, but strict 709 it ain't. Likewise
Alexa, which rolls off from the 709 curve quite early on, which gives
that delightful tonal scale rendering, but again it isn't the straight Rec.709
curve. Since log curves aren't meant to be WYSIWYG, the manufacturer
can place 18% gray wherever they think they get the most dynamic range
out of the curves. More often than not they place it farther down the
curve to increase highlight retention, but this is not always the case.
VARIATION IN LOG CURVES
When it comes to log encoded video, however, all bets are off as far as
white point and middle gray: the engineers at camera manufacturers have
made considered decisions about what seems to work best for their sensors
and their viewpoint about what is the "best" image data to be recorded.
The point of log is to push the highlight values down to where they can be
safely recorded, leave room for specular highlights, and bring the darkest
values up above noise, and this naturally has an effect on the midtones as
well. Where 18% falls in terms of the waveform monitor and code values
varies according to camera manufacturer, as each of them has their own
philosophy of what works best for their cameras. Figures 9.32 through
9.39 show the waveform and code values for middle gray and 90% white
for several log encoding schemes. None of them places 18% middle gray
even close to 50%. Not surprisingly, the white point is much lower than it
is in Rec.709, as this is a goal of log encoding: to preserve the highlights
and make room available for specular highlights/superwhite.
Of course, the log curves from each camera are quite different in how
they reproduce the grayscale; none of them puts the brightest white step
at 100% or the pure black at 0%. The sample grayscales, which were created
by Nick Shaw of Antler Post in London, show five stops above and
five stops below middle gray, with a pure black patch in the center; each
step of the scale goes up or down by a full stop. He says that he created
these samples because "many people were not aware that the same level
on a waveform meant something different depending on what camera was
being used and what monitoring LUT was loaded." He adds these additional
notes:
• Each camera's log curve is quite unique.
• Each camera's "video" (Rec.709) curve is also unique.
• Sony Rec.709 (800%) is the only camera curve to reach 0%.
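Why a log curve spaces one-stop steps so evenly can be seen with a toy example. This is not any manufacturer's actual curve; the function and its constants are invented purely for illustration:

```python
import math

# A generic, illustrative log encode: NOT any camera's actual curve.
# It maps scene-linear values to a 0-1 signal logarithmically, so each
# doubling of light (one stop) moves the signal by a constant amount.

def toy_log_encode(L, stops_of_range=12.0, mid_gray=0.18):
    """Map linear light L to a 0-1 signal, with mid_gray at the center
    of a stops_of_range-stop window. Purely illustrative."""
    stops_from_mid = math.log2(L / mid_gray)
    return 0.5 + stops_from_mid / stops_of_range

# Five stops under to five stops over middle gray, one stop per step,
# like the Nick Shaw grayscales:
levels = [0.18 * 2 ** s for s in range(-5, 6)]
signal = [round(toy_log_encode(L), 3) for L in levels]
print(signal)
# Each stop moves the signal by the same amount (1/12 of the range),
# which is why the steps look evenly spaced on the waveform.
```

With a real camera curve the spacing is only approximately even, since manufacturers bend the toe and shoulder, but the underlying idea is the same.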
186 · cinematography: theory and practice
[Figures 9.32 through 9.35: waveform traces of the test grayscale rendered through Rec.709, RedLogFilm/Cineon, RedGamma3, and Arri 709; vertical axis 0 to 120 IRE.]
Figure 9.32. (Top) Rec.709 doesn't have the dynamic range to represent all eleven steps of this grayscale, as the waveform
shows. These computer-generated grayscales have a range from five stops above to five stops below 18% middle gray (11
stops total), so the right-hand white patch is brighter than 90% diffuse white would be on a physical test chart. All of the
grayscales are copyright Nick Shaw of Antler Post in London and are used with his permission.
Figure 9.33. (Second down) RedLogFilm/Cineon keeps the black level well above 0% and the brightest white well below 100%,
with steps that are fairly evenly distributed.
Figure 9.34. (Third down) RedGamma3 shows all the steps, but they are not evenly distributed, by design. The middle tones get
the most separation as they are where human vision perceives the most detail.
Figure 9.35. (Bottom) In Arri 709 the middle tones get the most emphasis while the extremes of the dark and light tones get
much less separation between the steps. Black comes very close to 0% on the waveform.
[Figures 9.36 through 9.39: waveform traces of the same grayscale rendered through Arri LogC, S-Log, S-Log 2, and Sony Rec.709 (800%); vertical axis 0 to 120 IRE.]
Figure 9.36. (Top) Arri's LogC is a much flatter curve, with the brightest patch (five stops above 18%) down at 75% and 18% gray
well below 50% (as is true of all of these curves).
Figure 9.37. (Second down) S-Log from Sony shows a distribution similar in some ways to Cineon but keeps more separation in
the highlights while crushing the dark tones very slightly more. The brightest white patch gets very near 100%.
Figure 9.38. (Third down) S-Log2 crushes the black a bit more but still doesn't let pure black reach 0%, and keeps the brightest
white patch well below 100%.
Figure 9.39. (Bottom) Sony Rec.709 at 800% actually places the brightest white patch above 100% on the waveform, but with
little separation. The midtones get the most separation and the darkest tones are somewhat crushed. Unlike "regular" 709,
there is still separation at the high end, but like Rec.709, pure black goes very close to 0%.
10
color
Figure 10.1. Dramatic color makes this shot from The Fall especially powerful.
Color is both a powerful artistic and storytelling tool and a complex technical
subject. As with most artistic tools, the better you understand the
technical side, the better equipped you will be to use it to serve your creative
purposes. Although we'll be delving into the science and technology
of color, it is important to never lose sight of the fact that it all comes
down to human perception; the eye/brain combination and how it interprets
the light waves that come in is the basis of everything we do in this
area. As for technology, never forget that when it comes to achieving your
artistic and craftsperson goals, anything goes; you are never constrained
by some "techie" requirement, unless, of course, ignoring that technical
aspect is going to interfere with what you finally want to achieve.
COLOR TERMINOLOGY
As we saw in the chapter Measuring Digital, what we commonly call color is
more properly termed hue. Value is how light or dark a color is, and saturation
is how "colorful" it is; in video we more commonly call it chroma saturation
or just chroma. A desaturated color in everyday terms might be called
a pastel. These terms are the basis for two color models: Hue/Saturation/
Value (HSV) and Hue/Saturation/Lightness (HSL). These systems are widely
used in computer graphics and sometimes show up in applications used in
dealing with video streams or individual frames (such as when adjusting a
frame or frames in Photoshop, for example); they are widely used in color
pickers, which are almost always a part of visual software.
Derived from the color wheel, it is easy to conceptualize hue as a circle
and saturation as its distance from neutral white in the center. This is shown
in Figure 10.3; this is a simple visualization of hue, color mixing, and
chroma saturation decreasing to white (no saturation). However, it does
not show value/lightness decreasing to black. This is a pervasive problem
with illustrating and graphing color models since there are usually three
important axes; it is hard to do it on the two-dimensional space of paper.
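The HSV model described above happens to be included in Python's standard library, which makes the hue/saturation/value relationship easy to see in numbers (colorsys is a real stdlib module; the sample colors are arbitrary):

```python
import colorsys

# Pure red: hue 0, full saturation, full value.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
print(h, s, v)   # -> 0.0 1.0 1.0

# A pastel red (pink): same hue, lower saturation, still bright.
# This is exactly what "desaturated" means in the HSV model.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)
print(h, s, v)   # -> 0.0 0.5 1.0
```

All values are on a 0.0 to 1.0 scale; hue is the position around the color wheel, so 0.0 is red, roughly 0.33 is green, and roughly 0.67 is blue.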
COLOR TEMPERATURE: THE BALANCES
Even simple consumer cameras can be set to "daylight," but what is daylight?
It's not sunlight, which tends to be warmer due to the yellow sun of
Earth; it's not skylight, which is very blue; it's a combination of the two
(Figure 10.4). It is a somewhat arbitrary standard, established back in the
fifties when someone went out on the front lawn of the Bureau of National
Figure 10.2. (top) The basic color wheel shows the primary colors: long wavelengths (Red), medium wavelengths (Green), and short wavelengths (Blue), and the secondary colors that result from mixing two primaries equally.
Figure 10.3. (bottom) Newton's innovation was to take the spectrum and bend it into a circle. Note that the color wheel has magenta even though it does not appear on the spectrum. The secondaries Cyan and Yellow are mixtures of the primaries on either side of them on the spectrum, but Magenta is a mixture of Red and Blue, which are at opposite ends of the spectrum.
[Margin scale: 5600K: average daylight; 4800K: sunlight at noon; 3200K: incandescent (tungsten); 2600K: candlelight.]
be the same orange color as the light bulbs we use in filmmaking; in other
words, the Kelvin scale is sort of a common-sense application of the terms
"red hot," "white hot," etc. Here's the funny part: a lower temperature
makes the color "warmer," where logic would imply that making it hotter
makes it "warmer" in color. On this scale the average color temperature of
daylight is 5600K; 5500K and 6500K are also sometimes used as the average
"daylight." Consumer cameras (and most DSLRs) have other preset
white balances in addition to daylight and tungsten. These usually include
color · 191
Figure 10.5. (above) Hue, what most people think of as "color," is measured
around the color wheel. Value goes from dark to light (Red is used as the
example here), and Saturation is the "colorfulness": high saturation is the
color as very intense and low saturation is the color as a pale tint.
Figure 10.6. (right) The two axes of color: Red/Orange-to-Blue and
Magenta-to-Green. They are completely separate and have to be measured
separately, which is why all color meters have two different measurements:
Degrees Kelvin and CC (Color Correction) Index.
WARM AND COOL
On the warmer end of the scale are things like household light bulbs, candles,
and other smaller sources, which are in the 2000K to 3000K range.
All this brings us to the question: what do we mean when we say "warm"
or "cool" colors? Clearly it doesn't relate to the color temperature; in fact,
it works in the opposite direction (Figure 10.11).
Color temperatures above 5000K are commonly called cool colors
(bluish), while lower color temperatures (roughly 2,700-3,500K) are
called warm colors (yellow through red). Their relation on the color wheel
is shown in Figure 10.13. The color temperatures of black body radiators
Figure 10.8. (top) A neutral gray scale with the color balance skewed toward
warm light. Notice how the trace on the vectorscope is pulled toward red/orange.
Figure 10.9. (middle) The same chart with color balance skewed toward
blue; the vectorscope trace is pulled toward blue.
Figure 10.10. (bottom) The gray scale with neutral color balance; the vectorscope
shows a small dot right in the center, indicating that there is no color
at all: zero saturation.
are also shown in Figure 10.14; it is called the black body locus, meaning that
it is a collection of points, not a single instance. Color temperature is a
useful tool, but it doesn't tell us everything about the color of a light source;
specifically, it doesn't tell us anything about how much green or magenta is
in a light's radiation. This is a problem because many modern light sources
have a lot of green in them, usually a very unpleasant amount, especially
for skin tone; these sources include fluorescent tubes, CFLs (Compact Fluorescent
Lights), and even some HMIs (which we'll talk about in the chapter
Lighting Sources). All of these produce light by electrically exciting a gas
(LEDs work by a process called electroluminescence).
Because color temperature and green (and its opposite, magenta) are two
different aspects of light, they have to be measured separately. For this
reason, color meters will have two scales and output two measurements:
one in degrees Kelvin (color temperature) and the other in Color Compensation
(CC) units (the magenta/green scale, sometimes called tint). Cameras will
have similar controls.
WHITE BALANCE, BLACK BALANCE, AND BLACK SHADING
A white piece of paper will appear white to us whether it's in a fluorescent-lit
office, a tungsten-lit living room, or outdoors in the noonday sun. This
is because our brain "knows" that it's white, so it just interprets it as white.
This can accommodate a wide range of color sources but, of course, in
extreme conditions under a single color source such as pure red or blue,
it does break down and we see the paper as taking on the color of the
source. Cameras don't have the ability to adapt in this way: they will record
the color of light reflected off the paper just as it is. This is why we have
to white balance cameras. In older HD cameras, this white balance is baked
in; with cameras that shoot RAW, it is recorded in the metadata. Figures
10.11 and 10.12 show incorrect and adjusted color balance.
Most cameras will have preset color balances, but it is usually more accurate
to do an active white balance on the set. The VFX people will appreciate
this as well, as they frequently have to artificially re-create the lighting
conditions. It is helpful to understand how a camera accomplishes this
(Figure 10.7). Conceptually, it is a process of sensing what light is in the
scene and "adding" the opposite so that they balance back to neutral white.
Technically, cameras achieve this by adjusting red and blue in relation to
green. In traditional HD cameras (and even in a few RAW cameras) this
is done by adjusting the gain on the three color channels. In cameras that
shoot RAW, the gain of the color channels is not changed in the camera
(except on some Canon and Sony cameras); the color balance adjustments
are recorded in the camera and the actual changes occur in color grading.
Black Balance is also extremely important; most cameras will perform
a black balance automatically, usually with the lens capped for complete
darkness. Without proper black balance, some colors won't reproduce
properly (see Measuring Digital). Black Shading is different. The Red company
explains it like this: "Noise in any digital image is the result of both
'fixed pattern' and random noise. The former is caused by persistent variations
in light sensitivity between pixels, whereas the latter is caused by
thermal fluctuation, photon arrival statistics, and other non-repeatable
sources. Everything else being equal, fixed pattern noise is, therefore, the
same for every image, whereas random noise is not.
"Black shading works by measuring the pattern of fixed noise, storing it
in memory, and then subtracting it out of all subsequent frames, leaving
only random noise behind. The pattern stored in memory is called a Calibration
Map, and is effectively a map of the black level for every pixel,
hence the name black shading. With Red cameras, it's only necessary when
exposure conditions differ substantially, such as with extreme changes in
temperature and exposure time, or after a firmware update."
Figure 10.11. (top, left) This shot was lit entirely by skylight, which is blue.
On the left, the camera is set to Tungsten balance and the resulting shot is
very blue; it looks unnatural.
Figure 10.12. (top, right) The same shot with the camera white balanced to the
existing light. Because it is ambient daylight, it is very slightly blue, so just
setting the camera to Daylight would not give a precise color balance.
Figure 10.13. (above) Warm and cool colors. This is a psychological phenomenon;
in terms of physics, the color temperature of Blue is "hotter" (higher
in degrees Kelvin) than Red, but this has little to do with our perceptual
reaction to colors.
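The earlier point, that cameras balance by adjusting red and blue gain in relation to green, can be sketched in a few lines. This is an illustration of the concept only, not any camera's actual algorithm; the function names and sample values are mine:

```python
# Illustrative white balance by channel gain: scale R and B so that a
# patch known to be neutral matches the green channel. Green is left
# alone, which is how the "adjust red and blue relative to green"
# approach works in principle.

def white_balance_gains(neutral_rgb):
    """Given the average R, G, B of a neutral (gray/white) patch,
    return the per-channel gains that make it read neutral."""
    r, g, b = neutral_rgb
    return (g / r, 1.0, g / b)   # green is the reference channel

def apply_gains(pixel, gains):
    return tuple(c * k for c, k in zip(pixel, gains))

# A gray card shot under warm (tungsten-ish) light reads high in red
# and low in blue:
gray_patch = (0.30, 0.20, 0.12)
gains = white_balance_gains(gray_patch)
print(apply_gains(gray_patch, gains))   # all three channels now equal
```

In a traditional HD camera these gains would be applied in hardware; in a RAW workflow the same numbers would simply ride along in metadata and be applied in grading.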
MAGENTA VS. GREEN
We tend to think of this process as purely about tungsten vs. daylight; this
is probably because in the days of shooting on film emulsion, the manufacturers
made only two types of film: tungsten or daylight. When shooting
film in a lighting situation that included green (such as fluorescent in an
office) you had only a few choices:
• Turn off the fluorescents and provide your own lighting.
• Put Minus-Green (magenta) gels on the fluorescents.
• Filter the lens with a Minus-Green (magenta) filter.
• Add Plus-Green to your lights on the set and shoot a gray card for
reference and fix it in post.
Figure 10.14. The anatomy of the CIE Diagram. Not unlike the color wheel, it starts with the spectrum (wavelengths from 380 to 680 nanometers). The Spectral Locus (horseshoe) represents the colors of the spectrum at maximum saturation, with saturation decreasing as you move toward the center, where all colors mix to create white. At the bottom is the Line of Non-Spectral Purples (Magentas), which are not spectrally pure in that they can only be achieved by mixing; they don't appear in the original spectrum, just as there is no magenta in a rainbow. Near the middle is the locus of the colors of a black body radiator at various temperatures (2000K, 3200K, 6000K, and 10,000K are marked). As you can see, no single one of them can be considered to be "pure white."
As you can guess, this is not always necessary with a video camera, because
the white balance function on the camera is capable of removing any color cast
that takes the image away from an overall neutral white appearance (which
obviously does not apply to creative choices concerning color tone). This
includes green/magenta imbalance as well as red/blue (daylight/tungsten)
color shifts. Keep in mind, however, that altering the lighting is still often
the best choice for maintaining control of the color. Lighting situations,
especially those that involve a mixture of different types of sources, are
seldom about just getting a mechanically "OK" neutral appearance. More
importantly, fluorescent and other "green" sources are seldom full-spectrum
light; their emissions are discontinuous and heavy in the green part of the
spectrum and thus won't reproduce color faithfully.
THE CIE DIAGRAM
Over the centuries, there have been dozens of color systems—attempts to
quantify color in diagrams or numbers. To some extent, all of them are
pretty much theoretical. What was needed was a system based on actual
human perception of color. This is why the CIE (International Commis-
sion on Illumination or in French Commission Internationale de l’Eclairage) was
formed in 1913 to conduct research into color science to develop interna-
tional standards for colorimetry.
THE SPECTRAL LOCUS
Today, you will see the CIE chromaticity diagram (Figure 10.14) just about
everywhere that video color is being discussed. It is so significant that we
need to examine it in detail. First is the "horseshoe" shape with the various
hues. The curved outline of this diagram is really just the spectrum bent
into the horseshoe shape. This outer boundary is called the spectral locus; it
is the line of the pure hues of the spectrum at maximum saturation. Also
note that not all hues reach maximum saturation at the same level.
Within the area enclosed by the spectral locus are all of the colors that the
human eye can perceive. One tenet of color science is that if the human
eye can’t see it, it isn’t really a color. Sometimes color science strays over
that boundary into what are called imaginary colors or non-realizable color,
but these are for purely mathematical and engineering reasons.
Figure 10.15. Parade view on the waveform monitor clearly shows the incorrect
color balance of what should be a neutral gray chart. On the waveform,
the Red channel is high, while Green is a bit lower and Blue is very low (the
top end of each channel is circled in this illustration). This is why so many
colorists and DITs say that they "live and die by parade view."
THE WHITE POINT
In the center is a white point, where all the colors mix together to form
white. It is not a single point; the CIE includes several white points called
illuminants, all of them along the black body locus we talked about earlier.
In the example shown in Figure 10.16, the white point shown is D65,
which is roughly the same thing (in theory) as a scene lit with daylight-balance
light. Other CIE Illuminants include A (Tungsten), F2 (Cool White
Fluorescents), and D55 (5500K). There is no one official standard, but D65
(6500K) is the most widely used as the white point for monitors.
THE LINE OF PURPLES
Along the bottom straight edge is an especially interesting part of the
diagram: the line of non-spectral purples, commonly called the line of purples.
Think back to Newton’s color wheel, which bent around to join the short
wave end of the spectrum (Blues) with the long wave end (Reds) which
results in magenta/purples: colors that can only be achieved by mixing.
GAMUT
The CIE diagram shows all colors that human vision can perceive, but
currently no electronic method can represent all of them. So within the
horseshoe we can place representations of the various degrees of color that
cameras, monitors, projectors, or software can achieve—this is called the
gamut. The limits of gamut are important measures of a particular camera
or a system of color. As with dynamic range (which we usually think of as
grayscale range) cameras, monitors, and projectors are steadily improving
in their gamut range. Gamut is most easily visualized on the CIE diagram;
it is defned by its primaries, usually in the areas of Red, Green, and Blue
and its white point. Figure 10.16 shows the gamuts of various color spaces.
It also identifes one of the CIE-defned illuminants—D65, or 6500K.
Once you have defned a gamut, it is easy to tell, either graphically or
mathematically, when something is out of gamut, meaning it falls outside
the triangle. As color is passed from one device to another or one color space
to another, it is possible for some color points to be out of gamut. This can
be dealt with in a variety of ways: the color can simply be clipped, or it can
be brought back in through the mathematical operation of a matrix trans-
form or with a Look Up Table (LUT), which we’ll discuss in Image Control.
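The "mathematically" part is just a point-in-triangle test on the chromaticity diagram. A sketch using the published Rec.709 primaries (the xy coordinates are from the Rec.709 specification; the function is mine):

```python
# Out-of-gamut test on the CIE xy chromaticity diagram: a color is in
# gamut if its (x, y) point falls inside the triangle formed by the
# three primaries. Rec.709 primaries and D65 white per ITU-R BT.709.

REC709 = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B
D65 = (0.3127, 0.3290)

def in_gamut(p, primaries=REC709):
    """True if chromaticity point p lies inside the primaries' triangle."""
    signs = []
    for i in range(3):
        ax, ay = primaries[i]
        bx, by = primaries[(i + 1) % 3]
        # Sign of the cross product tells which side of edge A->B p is on;
        # inside the triangle, p is on the same side of all three edges.
        signs.append((bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax))
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)

print(in_gamut(D65))          # the white point is inside: True
print(in_gamut((0.1, 0.8)))   # a saturated green near the locus: False
```

Swapping in a different set of primaries (P3, Rec.2020, and so on) tests against that color space's triangle instead; this is essentially what gamut-warning tools in grading software do.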
VIDEO COLOR SPACES
Now that we have some basic concepts and terminology, let’s look at some
color spaces that are used in production, post, and distribution. Having a
solid understanding of the various color spaces, their potential, and their
limits is important for cinematographers, colorists, editors, VFX people,
and those involved in mastering and distribution. Following are some of
the most widely used color spaces in HD and UltraHD video work.
Figure 10.16. The relative gamuts of film, Rec.2020, DCI P3, and Rec.709, plotted on the CIE x and y axes with the D65 white point marked. This clearly shows the value of the CIE chart in comparing the relative gamuts (limits) of various imaging color spaces. Rec.2020 is the gamut for UHD video. Remember that these are the theoretical limits of the gamuts of various color spaces; whether or not individual cameras, projectors, monitors, or color correction systems achieve these limits is a different issue.
Figure 10.17. (top) All matrix controls set at zero on a Sony F3 (camera default). The Macbeth Color Checker
[Vectorscope graticule labels: R, MG, YL, G, CY.]
THE MATRIX
The matrix refers to a mathematical/electronic function that controls
how colors are converted from the sensor to camera output. In short, the
matrix in the camera controls the way in which the red, green, and blue
signals from the sensors are combined. It is a mathematical formula that
can be altered to suit the needs of the shot and the look you're going for. It
is not to be confused with white balance, which alters the overall color cast
of the scene to adjust to differently colored lighting sources (and sometimes
as a creative choice as well). White balance is an overall shift of all
colors in one direction or another, usually toward blue (daylight balance)
or toward red/orange (tungsten balance), but may also include adjustments
for the green in fluorescents, for example. Matrix adjustments generally
have little effect on the appearance of whites and neutral colors (such as the
gray card); see Figures 10.17, 10.18, and 10.19.
Art Adams has this to say: "I describe the matrix as adding and subtracting
color channels from each other. Not colors, but channels; that's a big
difference. Channels are just light and dark, like the black and white negatives
from three-strip Technicolor. The dye filters on a sensor's photosites
require some overlap so they can reproduce intermediary colors. The way
to get pure color is to subtract one color channel from another to remove
its influence. Or, if a color is too saturated, you can add some of another
channel's color to desaturate it. Do the blue photosites respond a little
Figure 10.20. AMPAS ACES color space
is actually larger than the gamut of the
human eye. For mathematical reasons,
the ACES color space includes not only
every color the eye can perceive but
also colors outside the spectral locus.
These are called non-realizable or imaginary
colors. They basically exist only to
make the equations come out right.
too much to green, adding a bit of blue to anything that’s green? Subtract
some of blue’s channel from green’s channel and you can clear that right
up. It’s not as simple as this, but that’s the idea.”
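Adams's channel add/subtract description is exactly what a 3x3 matrix multiply does to each pixel. A minimal sketch (the matrix coefficients are invented for illustration, not a real camera preset):

```python
# Channel mixing as a 3x3 matrix: each output channel is a weighted
# sum of the input channels. Rows sum to 1.0 so that neutral
# (R = G = B) pixels are left unchanged, which is why matrix tweaks
# barely affect whites and grays.

# Illustrative only: subtract a little of the blue channel from the
# green channel (as in the Adams example), compensating on the diagonal.
GREEN_FIX = [[1.0, 0.0,  0.0],
             [0.0, 1.1, -0.1],
             [0.0, 0.0,  1.0]]

def apply_matrix(m, rgb):
    return tuple(sum(m[row][col] * rgb[col] for col in range(3))
                 for row in range(3))

neutral = (0.5, 0.5, 0.5)
greenish = (0.2, 0.6, 0.3)
print(apply_matrix(GREEN_FIX, neutral))   # unchanged: rows sum to 1
print(apply_matrix(GREEN_FIX, greenish))  # green is now 1.1*G - 0.1*B
```

The matrix presets in cameras (and the Lift/Gamma/Gain-style tools downstream) are built from exactly this kind of per-channel arithmetic, just with more carefully chosen coefficients.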
As an example, the Sony F3 (which does not shoot RAW but has many
other controls over the image) has several matrix presets in the Picture Look
menu. This camera also offers several preset matrix combinations: in addition
to a Standard matrix setting, there is a HighSat matrix which heightens
the saturation of the colors, a Cinema setting which more closely simulates
the color tone reproduction of film shooting, several other selections,
and finally an FL Light setting for shooting with fluorescents.
Here it is important to remember that fluorescent lighting involves more
than just a white balance setting, which is why there is no matrix preset for
Tungsten or Daylight; those adjustments can be handled adequately either
by using the white balance presets or by shooting a gray card or neutral
white target under the scene lighting conditions. Many cameras have the
ability to store user-set matrix adjustments. Typically, for cameras that
have matrix controls, the parameters that can be adjusted are:
• R-G: Red has only its saturation changed, while Green has both
its saturation and hue (phase) altered.
• R-B: Red keeps the same hue but saturation changes, but Blue
changes in both hue and saturation.
• G-R: Green changes only saturation but Red can vary in both hue
and saturation.
• The same concept applies to G-B (green-blue), B-R (blue-red),
and B-G (blue-green).
COLOR BALANCE WITH GELS AND FILTERS
The term gel refers to color material that is placed over lights, windows,
or other sources in the scene. Filter is the term for anything placed in front
of the lens to (among other things) control color. There are three basic
reasons to change the color of lighting in a scene, which can be done by
adding gels to the sources or by using daylight or tungsten units, or a com-
bination of them:
• To correct (convert) the color of the lights to match the flm type
or color balance of a video camera.
Figure 10.21. (top) A conceptual dia- • To match various lighting sources within the scene to achieve
gram of the three controls in the ASC- overall color harmony. The trend nowadays is to let the sources
CDL system—Power, Ofset, and Slope be diferent colors.
and how they afect the image The
colors here are diagrammatic only; they • For efect or mood.
do not represent color channels
Gelling the lighting sources gives you more control over the scene since
Figure 10.22. (above) All companies not all lights have to be the same color. Using a flter on the camera makes
that make lighting gels have gel books everything uniformly the same color. The exception to this is flters called
available for free These include sam-
ples of their color gels and difusions grads, which change color from top to bottom or left to right or diagonally
This gel book from Rosco shows CTB, depending on how they are positioned. It is important to remember that
CTO and Minus-Green
gels and flters only work by removing certain wavelengths of light, not
by adding color.
The three basic flter/gels families used in flm and video production are
conversion, light balancing, and color compensating. This applies to both light-
ing gels and camera flters. See Table 10.3 for just a few of the most com-
monly used types. There are also hundreds of gels that are random, non-
calibrated colors called party gels.
CONVERSION GELS
Conversion gels convert daylight to tungsten or tungsten to daylight bal-
ance. They are by far the most commonly used color gels in flm and video
production. They are an essential part of any gel package you bring on
a production. In general, daylight sources are in the range of 5400K to
6500K, although they can range much higher. Near sunrise and sunset,
they are much warmer because the sun is traveling through a much thicker
layer of atmosphere and more of the blue wavelengths are fltered out. The
amount of dust and humidity in the air are also factors, which accounts for
the diferent colorings of sun and sky at various times of the day, in difer-
ent locales, or in diferent weather conditions. Daylight sources include:
• Daylight itself (daylight is a combination of direct sun and sky).
• HMIs, Xenons, and some LED lights.
• Cool-white or daylight-type fuorescents.
• Color-correct fuorescent tube that can be daylight balance.
• Dichroic sources such as FAYs.
• Arcs lights with white-fame carbons (rarely used nowadays).
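One detail this chapter doesn't spell out: conversion gels are conventionally rated in mired shift (one million divided by the Kelvin temperature), because an equal mired step looks like a roughly equal color change at any temperature, while an equal Kelvin step does not. The arithmetic, with invented function names:

```python
# Mired math for conversion gels: mired = 1,000,000 / Kelvin.
# A gel's effect is a fixed mired shift, so the same gel produces a
# similar visual change whether the source is warm or cool.

def mired(kelvin):
    return 1_000_000 / kelvin

def mired_shift(from_k, to_k):
    """Positive result = warming (CTO direction);
    negative = cooling (CTB direction)."""
    return mired(to_k) - mired(from_k)

# Converting 5600K daylight down to 3200K tungsten:
print(round(mired_shift(5600, 3200)))   # -> 134 (a warming shift)

# Converting 3200K tungsten up to 5600K daylight:
print(round(mired_shift(3200, 5600)))   # -> -134 (a cooling shift)
```

Fractional gels (1/8, 1/4, 1/2 CTO and so on, as mentioned in the HMI correction section) simply deliver proportionally smaller mired shifts.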
Table 10.2. CAMERA FILTERS FOR INDUSTRIAL SOURCES

Color Balance | Existing Source       | Camera Filters
Tungsten      | High-Pressure Sodium  | 80B + CC30M
Tungsten      | Metal Halide          | 85 + CC50M
Tungsten      | Mercury Vapor         | 85 + CC50B
Daylight      | High-Pressure Sodium  | 80B + CC50B
Daylight      | Metal Halide          | 81A + CC30M
Daylight      | Mercury Vapor         | 81A + CC50M
Figure 10.26. Color plays a key role in this frame from Nightcrawler.
CORRECTING OFF-COLOR LIGHTS
HMI
HMIs sometimes run a little too blue and are voltage dependent. Unlike
tungsten lights, their color temperature goes up as voltage decreases. For
slight correction, Y-1 or Rosco MT 54 can be used. For more correction,
use 1/8 or 1/4 CTO. Many HMIs also run a little green or magenta. Have
1/8 and 1/4 correction gels available.
INDUSTRIAL LAMPS
Various types of high-efficiency lamps are found in industrial and public
space situations. They fall into three general categories: sodium vapor, metal
halide, and mercury vapor. All of these lights have discontinuous spectra and
are dominant in one color. They all have very low CRIs. It is possible
to shoot with them if some corrections are made. High-pressure sodium
lamps are very orange and contain a great deal of green. Low-pressure
sodium is a monochromatic light, so it is impossible to fully correct.
CAMERA FILTRATION FOR INDUSTRIAL SOURCES
Table 10.2 shows recommended starting points for using camera filtration
to correct off-balance industrial sources. They are approximations
only; obviously, these units are not manufactured with much attention
to precise color balance. These gel combinations should be confirmed with
metering and testing. In video you may be able to correct partly with the
camera's white balance function. In film, never fail to shoot a grayscale and
some skin tone for a timer's guide. Only with these references will the
color timer or video transfer colorist be able to quickly and accurately correct
the color. For more on shooting the color reference, see the chapter
on Image Control. For more detailed information on controlling color on
the set and on location, see Motion Picture and Video Lighting and The Filmmaker's
Guide to Digital Imaging, also by Blain Brown and from Focal Press.
COLOR AS A STORYTELLING TOOL
As with everything in film and video production, stylistic choices affect
the technical choices and vice versa. This is especially true with color cor-
rection. Until a few years ago, considerable time and money were spent on
correcting every single source on the set. Now there is more of a tendency
to “let them go green” (or blue or yellow or whatever)—a much more
naturalistic look that has become a style all its own, influenced by films
such as The Matrix, Fight Club, Se7en, and others. More extreme color
schemes for particular scenes or the entire film are frequently used. These
are accomplished through lighting on the set, filtration, adding a Look or
LUT to the camera while shooting, or with color correction at the DIT
cart or final color correction in post.
FILM COLOR PALETTES
Figure 10.27. The striking color of her yellow dress is a strong color contrast to the evening sky in this shot from La La Land
On the following pages are the color palettes of several films and a music
video. These color analysis charts were created by Roxy Radulescu for her
website Movies In Color, and are used with her permission.
On pages 214 through 218 are some examples from the music video Freedom!
(DP Mike Southon), The Fall (Colin Watkinson), Snatch (Tim Maurice-Jones),
Delicatessen (Darius Khondji), Fight Club (Jeff Cronenweth),
and Ju-Dou (Changwei Gu and Lun Yang), directed by Zhang Yimou, who
was a cinematographer himself before he turned to directing.
Figure 10.28. (top) Irma La Douce, a good example of the “Technicolor look”—cinematography by Joseph LaShelle
Figure 10.29. (above) Burn After Reading—DP Emmanuel Lubezki
Figure 10.30. (top) Apocalypse Now—Vittorio Storaro
Figure 10.31. (above) Pina—cinematography by Hélène Louvart and Jörg Widmer
Figure 10.32. (top) Breaking Bad—Michael Slovis
Figure 10.33. (above) Chinatown—John Alonzo
Figure 10.34. (top) The Dark Knight—Wally Pfister
Figure 10.35. (above) Midnight In Paris—Darius Khondji
Figure 10.36. (top) Moulin Rouge—Don McAlpine
Figure 10.37. (above) Pulp Fiction—Andrzej Sekula
Figure 10.38. (top) Batman Begins—Wally Pfister
Figure 10.39. (above) Gladiator—John Mathieson
Figure 10.40. (top) Until the End of the World—Robby Mueller
Figure 10.41. (above) Sherlock Holmes—Philippe Rousselot
Figure 10.42. (top) House of Flying Daggers—cinematography by Xiaoding Zhao
Figure 10.43. (above) Snatch—Tim Maurice-Jones
Figure 10.44. The color pattern of the music video for Freedom! (1990), photographed by DP Mike Southon
Figure 10.45. Bold color in Tarsem Singh’s The Fall (2006), photographed by Colin Watkinson
Figure 10.46. Color pattern of Lock, Stock and Two Smoking Barrels (1998); DP—Tim Maurice-Jones
Figure 10.47. Fight Club (1999), photographed by Jeff Cronenweth
Figure 10.48. Color in Ju-Dou (1990) by Zhang Yimou—cinematography by Changwei Gu and Lun Yang
11
image control
GETTING THE LOOK YOU WANT
Let’s take a look at the tools we have available to control and manipulate
our images. In front of the camera:
• Lighting.
• Choice of lens.
• Filters.
• Mechanical effects (smoke, fog, rain, etc.).
• Choosing time of day, direction of the shot, weather, etc.
In the camera:
• Exposure.
• Frame rate.
• Shutter speed.
• Shutter angle.
• Gamma.
• White balance.
• Color space.
And of course, in shooting film, you have the option of choosing the film
stock and perhaps altering the processing and changing the look by color
correcting during printing. Not only are these options also available when
shooting digital, we now have much more control than before.
As always, we can dramatically change the nature of the image in the
camera and afterwards. This has been true since the early days of film, but
with digital images, the amount of control later on is even greater. As one
cinematographer put it, “The degree to which they can really screw up
your images in post has increased exponentially.” But notice that the first
sentence of this paragraph didn’t say “in post.” The reason is that there
is now an intermediate step—the DIT, or Digital Imaging Technician.
AT THE DIT CART
The DIT cart is not really postproduction, but it’s not “in the camera”
either—it’s an in-between stage where there are all sorts of options. On
some smaller productions, the DIT may be concerned with nothing more
than downloading media files; in this case, they are more properly a Data
Manager, Digital Acquisition Manager, Data Wrangler, or similar terminology.
On other productions the DIT cart might be a central hub of the creative
process where looks are being created, controlled, and modified; where
camera exposure and lighting balance are being constantly monitored or
even directly controlled; and where the director, DP, and DIT are involved
in intense conversations and creative back-and-forth concerning all aspects
of visual storytelling.
These aspects of the visual image are what we are concerned with in this
chapter: they are both camera issues and postproduction/color correction
issues. In digital imaging, the dividing line between what is “production”
and what is “post” is indefinite at best.
WHAT HAPPENS AT THE CART
We only mention the DIT here to show its place in the workflow on the
set. We’ll go more deeply into what the DIT does and what happens at the
DIT cart in the chapter DIT & Workflow.
In some cases, the images from the camera receive no processing at all
as they pass through the DIT station, and the process is more clerical and
organizational—a matter of downloading, backing up, and making shuttle
drives to deliver to postproduction, the production company, archives, etc.
In this case, the operator is called a Data Manager or Data Wrangler. In other
cases, the DIT spends a good deal of time rendering dailies for use by the
director, different ones for the editor, and a third set for the VFX process.
As previously noted, in some instances there are substantial changes to the
appearance of the image and even color correction and creating a look. In
short, there is a wide range of functions at the DIT cart. In this chapter
we’re going to talk about only processes that affect the look of the images:
in some cases just making them viewable and in other cases, applying
creative choices that are part of the visual story. So let’s add to our list of
image controls and bring in some of the topics we’re going to talk about
here:
Figure 11.1. (top) The DSC Labs OneShot photographed in daylight with the camera on Tungsten setting
Figure 11.2. (above) DaVinci Resolve has the ability to automatically color correct the OneShot and other test charts. After correction, the three color channels are at the same levels in the highlights and midtones and the color of the shot is normalized
Figure 11.3. (above) The Tangent Wave 2 control surface for color correction. Instead of rings around the trackballs, this control has separate dials for Lift, Gamma, and Gain
Figure 11.4. (right) Functions of the wheels on a color grading panel, in this case, the control for Lift
Figure 11.10. (top) Crossed gray gradient curves with gamma at normal. The gradients display normal contrast with even distribution of tones from black to white
Figure 11.11. (second down) The same gradients with gamma raised; the effect is lower contrast and raised midtones. This mirrors the gamma curves we saw in the chapter Linear, Gamma, Log
Figure 11.12. (third down) The gradients with gamma lowered; the effect is raised contrast. In both cases, there is still pure black at one end and pure white at the other end, but the midtones are altered significantly
Figure 11.13. (fourth down) Gain raised; in this case, it causes clipping. This is just an example; raising the gain does not always cause clipping, although it is always a danger. Just as with lift, the midtones are also changed
Figure 11.14. (bottom) Gain lowered. Highlights become gray. Note that in both examples, middle gray also moves up or down
GAMMA/MIDTONES
Gamma/Midtones affects the medium tones of the picture. In practice, it
can be seen as a contrast adjustment. As we noted in Linear, Gamma, Log,
the term gamma is used a bit loosely in the video world, but it is largely
the same concept here. In Figures 11.10 through 11.12, you can see how
Gamma affects the middle range of the gray scale: it can take these tones
up or down while pure black and pure white remain anchored; this gives
it the bowed shape.
GAIN/HIGHLIGHTS
Gain (sometimes called Highlights) affects the brightest areas of the image
the most (Figures 11.13 and 11.14). Similar to Lift, it is anchored at the
dark end and so has very little effect there, while at the highlight end, it has
freedom to range up and down the scale.
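The behavior of Lift, Gamma, and Gain can be sketched numerically. This is a minimal illustration of the general idea, not the exact math of any particular grading application (real implementations differ in how they combine the controls), and the function name apply_grade is hypothetical:

```python
# Minimal sketch of Lift/Gamma/Gain on a normalized pixel value,
# where 0.0 is pure black and 1.0 is pure white.

def apply_grade(x, lift=0.0, gamma=1.0, gain=1.0):
    """Lift acts fully at black and not at all at white; gain is the
    reverse; gamma bows the midtones while 0.0 and 1.0 stay anchored."""
    x = x + lift * (1.0 - x)        # lift: raises shadows, white untouched
    x = x * gain                    # gain: scales highlights, black untouched
    x = max(0.0, min(1.0, x))       # raising gain can clip, as in Figure 11.13
    return x ** (1.0 / gamma)       # raised gamma lifts midtones (Figure 11.11)

print(apply_grade(0.5, gamma=2.0))   # ≈ 0.707: midtone raised, ends anchored
print(apply_grade(0.9, gain=1.5))    # 1.0: highlight clipped
```

Note how black and white stay put under gamma while the middle of the scale moves, matching the bowed curves in Figures 11.10 through 11.12.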
CURVES
In addition to separate controllers for each segment of the grayscale, most
applications allow you to draw curves for the image—to independently
manipulate the response curve, usually with Bezier controls. This can be
a fast and efficient way to work. Figures 11.15 and 11.16 show an image
adjusted by using Curves in Assimilate Scratch.
Offset moves all tones up or down
Figure 11.20. (top) A 1D LUT has separate tables for each color channel; however, for imaging purposes, it is almost always three 1D LUTs, one for each color channel (Illustration courtesy of Light Illusion)
Figure 11.21. (below) A 3D LUT is a cube or lattice. The values of 0 to 255 in both of these are the digital color values (Illustration courtesy of Light Illusion)
• Color balance.
• Color alteration.
• Effects.
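The mechanics Figures 11.20 and 11.21 describe can be sketched in a few lines: a 1D LUT maps each channel independently through its own table, interpolating between sampled points, while a 3D LUT extends the same idea to trilinear interpolation inside an (R, G, B) cube, so one channel’s output can depend on all three inputs. The function and the five-entry toy table below are hypothetical; real LUTs use far more sample points:

```python
# Minimal sketch of a 1D LUT lookup with linear interpolation.

def lut_1d(value, table):
    """Map a 0.0-1.0 value through a 1D table, interpolating
    linearly between the nearest two sampled entries."""
    pos = value * (len(table) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(table) - 1)
    frac = pos - lo
    return table[lo] * (1 - frac) + table[hi] * frac

# A toy contrast-boosting curve sampled at five points:
curve = [0.0, 0.15, 0.5, 0.85, 1.0]

# One 1D LUT per channel, as Figure 11.20 describes:
r, g, b = (lut_1d(c, curve) for c in (0.8, 0.5, 0.2))
print(r, g, b)
```

Because each channel is looked up separately, a 1D LUT can change contrast and color balance but cannot, for example, alter saturation based on hue; that requires the cross-channel lookups of a 3D LUT.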
DIFFUSION AND EFFECTS FILTERS
There are many types of diffusion filters, but they all have one common
purpose: they slightly alter the image to make it softer or more diffuse or
to reduce the contrast. They do it in a number of ways and with a variety
of effects (Figure 11.31). Nearly all diffusion filters come in grades: 1, 2, 3,
4, 5, or 1/8, 1/4, 1/2, 1, 2, and so on. Most rental houses will send them
out either as a set or as individual rentals. All types of filters usually come
in a protective pouch and often have some soft lens tissue wrapped around
them inside the pouch to make it easier to handle the glass without adding
fingerprints that need to be removed later.
Table 11.1. (below) Conversion factors for the 85 series of warming filters
Table 11.2. (bottom) Conversion factors for the 80 series of cooling filters
Diffusion filters are a very personal and subjective subject. Besides glass
or resin filters, which are placed in front of the lens (or in some cases
behind the lens or even in a slot in the middle of it), other methods such as
nets can be used. An older type of filter that dates back to the early days of
the studio system but is still popular today are the Mitchell diffusions,
which come in grades A, B, C, D, and E.

FILTER   CONVERSION    Mired   EXP. LOSS
85A      5500 > 3400   +112    2/3 stop
85B      5500 > 3200   +131    2/3 stop
85C      5500 > 3800   +81     1/3 stop

FILTER   CONVERSION    Mired   EXP. LOSS
80A      3200 > 5500   -131    2 stops
80B      3400 > 5500   -112    1 2/3 stops
80C      3800 > 5500   -81     1 stop
80D      4200 > 5500   -56     1/3 stop

There are some things to be aware of when using diffusion filters. They
will give different degrees of diffusion depending on the focal length of the
lens being used—a longer focal length lens will appear to be more heavily
diffused. Some DPs drop down to a lower degree of diffusion when changing
to a longer lens. Tiffen, a major manufacturer of camera filters, has created
digital filters in its Dfx software, which can be used in postproduction
to reproduce the effect of their glass filters. Be careful about judging the
effect of filters on a small on-set monitor, which can be deceiving.
Figure 11.31. Various grades of diffusion—the Schneider Black Frost® series (Photo courtesy of Schneider Optics)
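The mired values in Tables 11.1 and 11.2 follow from a standard formula: mired = 1,000,000 divided by the color temperature in Kelvin, and a filter’s mired shift is the difference between target and source. A quick check against the 85B and 80A rows:

```python
# Mired arithmetic behind color conversion filter tables.

def mired(kelvin):
    """Micro reciprocal degrees: 1,000,000 / color temperature."""
    return 1_000_000 / kelvin

def mired_shift(source_k, target_k):
    """Positive = warming filter, negative = cooling filter."""
    return mired(target_k) - mired(source_k)

# An 85B converts 5500K daylight for 3200K tungsten film:
print(round(mired_shift(5500, 3200)))   # +131, as in Table 11.1
# An 80A converts 3200K tungsten for 5500K daylight film:
print(round(mired_shift(3200, 5500)))   # -131, as in Table 11.2
```

Mired values are used because an equal mired shift produces a roughly equal perceived color change anywhere on the Kelvin scale, which degrees Kelvin alone do not.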
NETS
Another form of diffusion is nets or voiles. Many cinematographers use silk
or nylon stocking material, which can have a very subtle and beautiful
effect. Nets vary in their diffusion effect according to how fine their weave
is. Nets can come in filter form sandwiched between two pieces of optical
glass, or they might be loose pieces cut and attached to the front or rear
of the lens. Camera assistants always have scissors in their kit for this and
other tasks. Attaching a net to the rear of the lens has several advantages. A
net on the front of the lens can come slightly into focus with wider lenses
that are stopped down. A net on the rear of the lens is less likely to do this,
Figure 11.32. (top) A landscape scene without a filter (left) and with a LEE Sepia 2® and 75ND filters (Photo courtesy David Norton and LEE Filters)
Figure 11.33. (second down) No polarizer (left) and with polarizer (Photo courtesy David Norton and LEE Filters)
Figure 11.34. (third down) With a LEE Sunset 2® filter (Photo courtesy David Norton and LEE Filters)
Figure 11.35. (bottom) Without (left) and with (right) a LEE 0.9 Neutral Density Hard Grad® (Photo courtesy David Norton and LEE Filters)
although it still can happen. Also, the diffusion effect will not change as
the lens is stopped down or the focal length changes on a zoom. Attaching
a net to the rear of the lens must be done with great caution as there is
danger of damaging the exposed rear element of the lens or of interfering
with the spinning reflex mirror. A hidden tear or run when mounting the
lens will make the diffusion effect inconsistent. Putting a net on the rear
should be done with easily removable material such as transfer tape—also
sometimes called snot tape—a two-sided soft, sticky tape that is also used
for attaching lighting gels to open frames.
Figure 11.37. (opposite page, top) The Tiffen Black Diffusion FX. Note especially the effect on highlights, such as the light bulb (Photo courtesy Tiffen)
Figure 11.38. (opposite page, middle) Tiffen’s extremely popular Black Pro-Mist®; in this example a grade 1/2. The Pro-Mist family of filters creates an atmosphere by softening excess sharpness and contrast and creates a beautiful glow around highlights. The filters also come in warm and regular Pro-Mist. Some people call the regular version “white Pro-Mist.” The Black version shown here provides less contrast reduction in the shadow areas, producing a more subtle effect (Photo courtesy Tiffen)
Figure 11.39. (opposite page, bottom) The Warm Black Pro-Mist® combines Black Pro-Mist® and an 812 filter to add a warming effect (Photo courtesy Tiffen)
CONTRAST FILTERS
Various filters are used to reduce or soften the degree of contrast in a scene.
These typically work by taking some of the highlights and making them
“flare” into the shadows. Traditionally these were called “lo-cons.”
NEUTRAL DENSITY FILTERS
Neutral density filters are used to reduce overall exposure without affecting
color rendition. They can be used in extremely high-illumination situations
(such as a sunlit snow scene or a beach scene) where the exposure
would be too great or where less exposure is desired to reduce the depth-
of-field. Also known as Wratten #96, the opacity of ND filters is given in
density units so that .3 equals one stop, .6 equals two stops, .9 equals three
stops, and 1.2 equals four stops. If you combine ND filters, the density
values are added.
Neutral density filters combined with 85 correction filters (85N3, 85N6,
and 85N9) are a standard order with any camera package for film exterior
work. In video, of course, we can use presets or custom color balance the
camera to suit individual lighting conditions. NDs stronger than 1.2 are
quite common now; they commonly go up to 2.1. Strong NDs can pass far
red unless they are made using a process that employs thin layers of metal
coatings (see IR Filters below).
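The density arithmetic can be sketched directly: transmission is 10 raised to the minus density, and each 0.3 of density is one stop (a halving of the light). A short sketch:

```python
# ND filter arithmetic: density -> stops and transmission.

def nd_stops(density):
    """Each 0.3 of optical density equals one stop."""
    return density / 0.3

def nd_transmission(density):
    """Fraction of light transmitted: T = 10 ** (-density)."""
    return 10 ** (-density)

print(round(nd_stops(1.2), 3))              # 4.0 stops
print(round(nd_transmission(0.9), 3))       # ≈ 0.126, about 1/8 of the light
# Stacked NDs add densities: an ND.6 over an ND.3 behaves like an ND.9.
print(round(nd_stops(0.6 + 0.3), 3))        # 3.0 stops
```

This is why the .3/.6/.9/1.2 series maps so neatly onto one, two, three, and four stops, and why stacking filters adds densities rather than multiplying them.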
EFFECTS FILTERS AND GRADS
There are many kinds of special effects filters, ranging from the most obvious
to the more subtle. Sunset filters, as an example, give the scene an overall
orange glow. Other filters can give the scene a color tone from moonlight
blue to antique sepias (Figures 11.32 and 11.33).
In addition to filters that affect the entire scene, almost any type of filter
is also available as a grad. A grad is a filter that starts with a color on one side
and gradually fades out to clear or another color (Figure 11.34). Also commonly
used are sunset grads and blue or magenta grads to give some color to
what would otherwise be a colorless or “blown-out” sky (Figure 11.35).
Grads can be either hard edge or soft edge.
Neutral density grads are often used to balance the exposure between
a normal foreground scene and a hotter sky above the horizon. NDs come
in grades of .3 (one stop at the darkest), .6 (two stops), .9 (three stops), 1.2
(four stops), and higher. Be sure to specify whether you want a hard or soft
cut because there is a considerable difference in application. Whether you
need a hard or a soft cut will also be affected by what focal length lens you
are going to be shooting with. A longer focal length lens is better suited to
a hard edge grad, while the hard edge will be visible on a wider lens.
“When I went around looking at locations [for Barry Lyndon] with Stanley [Kubrick] we discussed diffusion among other things. The period of the story seemed to call for diffusion, but on the other hand, an awful lot of diffusion was being used in cinematography at the time. So we tended not to diffuse. We didn’t use gauzes, for example. Instead I used a No. 3 Low Contrast filter all the way through, except for the wedding sequence, where I wanted to control the highlights on the faces a bit more. In that case, the No. 3 Low Contrast filter was combined with a brown net, which gave it a slightly different quality. We opted for the Low Contrast filter, rather than actual diffusion, because the clarity and definition in Ireland creates a shooting situation that is very like a photographer’s paradise.”
—John Alcott (Barry Lyndon, A Clockwork Orange, The Shining)
CONVERSION FILTERS
Conversion filters work with the blue and orange ranges of the spectrum
and deal with fundamental color balance in relation to the color sensitivity
of the emulsion. Conversion filters affect all parts of the spectrum for
smooth color rendition. Light Balancing (LB) filters are for warming and
cooling; they work on the entire SED (Spectral Energy Distribution) as with
the conversion filters, but they are used to make smaller shifts in the Blue-
Orange axis.
Figures 11.40, 11.41, and 11.42. Comparison of three types of Tiffen diffusion filters
CAMERA LENS FILTERS FOR COLOR CORRECTION
Color compensating (CC) filters are manufactured in the primary and secondary
colors. They are used to make corrections in a specific area of the
spectrum or for special effects, although there is always some overlap into
adjoining wavelengths. Don’t make the mistake of trying to correct color
balance with CC filters. Primary filters work in a limited band of the
spectrum and correct within a narrow range of wavelengths centered on
the key color. Since CC filters are not confined to the Blue-Orange axis,
they can be used to correct imbalances in the Magenta-Green axis, such as
occur with fluorescent lamps. CC-30M (M for magenta) is a good starting
point for uncorrected fluorescent sources. As a lighting gel, it is known
as Minus-Green—it’s still magenta, but it is used to reduce the amount of
green in the light.
Figure 11.43. (top) The Tiffen Digital Diffusion/FX® series are resolution-reducing filters which don’t introduce color characteristics into the image. They are designed to reduce high-frequency resolution, to make people look great in HD/UHD without evidence of filtration, as only the fine detail is removed or reduced, while the larger elements within the image remain sharp (Photo courtesy Tiffen)
Figure 11.44. (above) Tiffen’s Warm Soft FX® series of filters combines Soft/FX® and the Tiffen 812® warming filter. It smooths facial details while adding warmth to skin tones (Photo courtesy Tiffen)
WARMING AND COOLING FILTERS
The 80 series, which are blue conversion filters, are used to convert warm
sources such as tungsten lights so that they are suitable for use with daylight
film (Figure 11.26). The 81 series of warming filters (81, 81A, 81B,
81C) increase the warmth of the light by lowering the color temperature in
200K increments (Table 11.1). For cooling, the 82 series works in the same
fashion, starting with the 82, which shifts the overall color temperature by
+200K. As with most color temperature correction filters, excess magenta
or green are not dealt with and must be handled separately. Corals are also
a popular type of filter for degrees of warming.
Figure 11.45. (top) Tiffen’s Glimmerglass® is a diffusion filter that softens fine details by adding a slight reduction of contrast while adding a mild glow to highlights. Ira Tiffen comments that, “in addition to the Glimmerglass diffusion effect, it also glitters visibly on the front of the lens, which can sometimes add confidence to an actor’s performance, since they can clearly see that the filter is in place on the lens to make them look their best.” (Photo courtesy Tiffen)
Figure 11.46. (above) The Smoque® set of filters creates the look and feel of a smoky veil over the image. This look cannot reproduce the effect of real smoke, which makes shafts of light possible when backlit, but it adds a subtle touch of atmosphere where special effects smoke machines or hazers cannot be used (Photo courtesy Tiffen)
CONTRAST CONTROL IN BLACK-AND-WHITE
Since color filters transmit some colors and absorb others, this makes them
useful in controlling contrast in black-and-white images. Most scenes contain
a variety of colors. The sky may be the only blue area in a landscape
shot, a field of grass may be the only largely green element in a scene, and
so on. We can use this to advantage even in black-and-white.
The basic principle of contrast control filtration in black-and-white cinematography
is that a filter lightens colors in its own area of the spectrum
and darkens the complementary (opposite) colors. A filter passes light that
is its color and absorbs light that is not its color. How strong an effect it has
is the result of two factors: how strong the color differences of the original
subject are and how strong the filter is. The scene we are shooting is the
result of the colors of the objects themselves and the colors of the light
that falls on them. Color filters only increase or decrease contrast on black-
and-white film when there is color difference in the scene. When a filter
is used to absorb certain colors, we are reducing the total amount of light
reaching the film. We must compensate by allowing more overall exposure.
The exposure compensation necessary for each filter is expressed as
the filter factor. The simple rule for black-and-white filters is: expose for the
darkest subject in the scene that is substantially the same color as the filter
and let the filter take care of the highlights.
Figure 11.47. (top) HDTV FX® filters are designed to address contrast and sharpness issues associated with HD and UHD shooting (Photo courtesy Tiffen)
Figure 11.48. (above) Sometimes combinations of filters are needed to produce the desired effect. This example shows a Soft FX® 1/2 together with an Ultra Con® 1. Tiffen’s Ultra Cons® work by lowering contrast uniformly, so that shadow areas reveal more detail without flare or halation from light sources or bright reflections (Photo courtesy Tiffen) Registered trademarks are the property of their respective owners
POLARIZERS
Natural light vibrates in all directions around its path of travel. A polarizer
transmits the light that is vibrating in one direction only. Polarizers serve a
variety of functions. Glare on a surface or on a glass window is polarized as
it is reflected. By rotating a polarizer to eliminate that particular direction
of polarization, we can reduce or eliminate the glare and surface reflection
(Figures 11.29, 11.30, 11.49, and 11.50). Brewster’s angle—56° from
normal, or 34° from the surface—is the zone of maximum polarization. A
polarizer can be used to darken the sky. Maximum polarization occurs at
about 90° from the sun. Care must be taken if a pan or tilt is used because
the polarization may change as the camera moves. If the sky is overcast,
the polarizer won’t help much. Polarizers reduce transmission, generally
at least 1 2/3 to 2 stops.
Circular polarizers aren’t really circular—some cameras (in particular,
prism cameras, or DSLRs that use prisms for exposure and auto focus)
don’t like linear (regular) polarizers because prisms introduce some amount
of polarization, and stacking polarizers can do bad things. Circular polarizers
basically de-polarize the light on the back side of the filter by giving
it a 1/4 wave spin. Messing it up in this way means that it passes through
prisms without any side effects.
Figure 11.49. (left) A scene shot through glass with no polarizer (Photo courtesy of Tiffen)
Figure 11.50. (right) With a polarizer—in this case, the Tiffen UltraPol®. (Photo courtesy of Tiffen)
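The 56° figure quoted above is Brewster’s angle for ordinary glass; it follows from the refractive index of the reflecting surface. A quick sketch (assuming n ≈ 1.5 for glass and n ≈ 1.33 for water):

```python
import math

# Brewster's angle: the angle of incidence (measured from the
# surface normal) at which reflected glare is fully polarized.
# theta_B = arctan(n_surface / n_air)

def brewster_deg(n_surface, n_air=1.0):
    return math.degrees(math.atan(n_surface / n_air))

print(round(brewster_deg(1.5), 1))    # 56.3° from normal for glass
print(round(brewster_deg(1.33), 1))   # 53.1° for water
```

At angles away from Brewster’s angle the reflection is only partially polarized, which is why rotating the polarizer kills glare completely at some shooting angles and only partially at others.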
IR FILTERS
As we discussed previously, some digital sensors are subject to IR contamination,
which means that they are sensitive to infrared wavelengths to an
extent that can significantly affect the look of a shot, particularly in day
exteriors. IR density filters prevent this by blocking the wavelengths that
cause problems. Called Hot Mirror or IR filters, they are available in varieties
that start cutting off infrared at different points of the spectrum—they
were mostly needed by early Red cameras. Some cameras have built-in IR
protection; see Figure 6.25 in Cameras. It’s not so much that sensors are
subject to IR contamination as sensors need to see a certain amount of far
red (red on the edge of the visible spectrum) to make flesh tone look rich
and healthy. Cutting that out makes people look slightly dead. ND filters
cut visible light, but their effect drops off at the edge of visible light, and
that’s where far red lives. The upshot is that ND filters block most visible
light but are transparent to far red. Putting in an ND 1.2 filter blocks four
stops of visible light but doesn’t block far red at all, so opening up the
stop allows four stops more of the far red to pass to the sensor. This isn’t a
camera issue but a dye filter issue.
Figure 11.51. Tiffen’s Black Pro-Mist series is a popular form of diffusion
12
lighting sources
Figure 12.1. Arri Skypanels are used to create the set and a Skypanel on a Junior offset arm is attached to the dolly (Photo courtesy Arri AG)
THE TOOLS OF LIGHTING
Cinematographers do not need to know all the details of how each piece
of lighting equipment works, but it is essential that they know the capabilities
and possibilities of each unit, as well as the limitations. A great deal
of time can be wasted by using a light or piece of grip equipment that is
inappropriate for the job.
Motion picture lights fall into several general categories: HMIs, tungsten
Fresnels, tungsten open face lights, LED lights, color-correct fluorescents, practicals,
and sunguns. Fresnel means a light that has a Fresnel lens, capitalized
because it is named for its inventor, Augustin-Jean Fresnel. There are also
variations, such as HMI PARs and LED Fresnel units.
COLOR BALANCE
Lighting units can generally be divided into those that output daylight
balance (5500K) or tungsten balance (3200K) light. In lighting, “K” has two
meanings—when discussing color temperature, it means degrees Kelvin (see
the chapter Color), and when talking about a lighting unit, it means one
thousand watts; for example, a 2K is a 2,000-watt light, a 5K is a 5,000-watt
light, and so on.
COLOR RENDERING INDEX
Lights are classified according to Color Rendering Index (CRI), which is a
measure of the ability of a light source to reproduce the colors of various
objects faithfully in comparison with a natural light source. This means
that a light with a low CRI will not render colors accurately. A CRI of
90 or above (on a scale of 0 to 100) is considered necessary for film and
video work, and also for still photography. CRI is an older standard and
is not well adapted to modern single-sensor cameras; also, it was only
designed to measure continuous spectrum light. Two new standards have
been developed: CQS (Color Quality Scale) and TLCI (Television Lighting
Consistency Index); they perform the same function as CRI, although
neither one solves all the problems of measuring the quality of discontinuous
spectrum lights.
DAYLIGHT/TUNGSTEN SOURCES
While most lighting units are inherently either daylight or tungsten, some
types of lighting units can easily be changed to either color balance or, in some
cases, changed to greenscreen, fluorescent, or many other color balances.
This is generally done by changing the bulbs. Bi-color LEDs have both
tungsten and daylight bulbs, and color is changed by dimming one group
up or down. With remote phosphor LEDs, it is done by changing the phosphor
screen.
LED LIGHTS
A new and very popular source is LED lights (Figures 12.1, 12.2, 12.3,
and 12.5), which are small and extremely energy efficient, which also
means that they produce much less heat than tungsten lights (where the
electricity produces 90% heat and only 10% light). LEDs have been incorporated
into all types of units. For lighting fairly close to the scene, they
have many advantages. Their compact size means they can be hidden in
many places on the set and also makes them easier to handle and rig on
location. There are also many LED lights that run on batteries—these can
be very useful for handheld work, camera mounting, and other conditions
where AC power may not be available. LED units now come in a wide
variety of types—from small units that can mount on the camera, up to
large units with very high output and nearly every type of fixture, and
even Fresnel units up to the equivalent of a 10K (Figure 12.5).
Figure 12.2. (top) Two LED light panels in use on a moving train. Not only are they compact and generate almost no heat, they can also run off of batteries for extended periods of time—especially useful for situations like this where AC power may not be available (Photo courtesy Adam Wilt)
Figure 12.3. (above) Like many bi-color LED panel lights, this unit from CAME-TV features a dimmer and also a color temperature control. This one also has an LED panel that displays the dimmer and color status of the light (Photo courtesy CAME-TV)
REMOTE PHOSPHOR LEDS
All LED lighting units for filmmaking contain multiple light-emitting
diodes. However, variations in manufacturing processes can result in LEDs
having differences in their color and output. LED manufacturers have used
binning to deal with this. LEDs are tested and then categorized into one of
a number of groups, or bins, according to the characteristic of light produced; since there are many rejects, this is one of the reasons for the high
cost of LED lights.
To get away from the need for binning, some manufacturers came up
with remote phosphor technology. While conventional white LEDs offer the
advantage of providing an integrated device with a known white-light
output, they operate as bright point sources of light, which causes the
uniformity problems alluded to above in applications with a large light-
emitting surface.
lighting.sources..245.
Remote phosphor technology employs a transparent surface onto which
a phosphor coating is applied. These phosphors are excited at precise
wavelengths; in this case, with LEDs completely separated from the phos-
phor substrate, resulting in very stable high-CRI white light. Color tem-
perature remains consistent because the phosphors are not subject to heat
degradation, unlike typical white LEDs.
HMI UNITS
HMIs generate three to four times the light of tungsten lamps but con-
sume up to 75% less energy for the same output. When a tungsten bulb is
color corrected to match daylight, the advantage increases to seven times
because a great deal of the spectrum is absorbed by the blue gel (color temperature blue, or CTB). Because HMIs (Figures 12.4 and 12.9) are more efficient in converting power to light, they generate less heat than a tungsten
lamp with the same output.
HMI stands for the basic components: H is from the Latin symbol for
mercury (Hg), which is used primarily to create the lamp voltage. M is
for medium-arc. I stands for iodine and bromine, which are halogen com-
pounds. The halogen serves much the same function as in a tungsten halo-
gen lamp in prolonging the useful life of the bulb and ensures that the rare
earth metals remain concentrated in the hot zone of the arc.
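The multiples quoted above can be sanity-checked with typical luminous efficacies. All three constants below are assumed, ballpark values (gel transmission in particular varies by manufacturer), so treat this as an illustration of the logic rather than exact figures:

```python
# Assumed typical efficacies: tungsten ~27 lm/W, HMI ~90 lm/W; assume a
# full CTB gel passes about half of the tungsten light.
TUNGSTEN_LM_PER_W = 27.0
HMI_LM_PER_W = 90.0
CTB_TRANSMISSION = 0.5

plain_ratio = HMI_LM_PER_W / TUNGSTEN_LM_PER_W
gelled_ratio = HMI_LM_PER_W / (TUNGSTEN_LM_PER_W * CTB_TRANSMISSION)

print(f"HMI vs bare tungsten:   {plain_ratio:.1f}x")   # ~3.3x
print(f"HMI vs gelled tungsten: {gelled_ratio:.1f}x")  # ~6.7x
```

With these assumed numbers the ratios land close to the chapter's three-to-four times and roughly seven times figures; the exact multiple depends on the gel's transmission and the specific lamps.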
HMI lamps have two electrodes made from tungsten, which project into a discharge chamber. Unlike tungsten bulbs, which have a continuous filament of tungsten wire, HMIs create an electrical arc that jumps from one electrode to another and generates light and heat in the process. Color temperature as it is measured for tungsten bulbs or sunlight does not technically apply to HMIs (or to other types of discharge lighting such as fluorescents) because they produce a quasi-continuous spectrum; instead, the measure Correlated Color Temperature (CCT) is used. In actual practice, though, the same measurements and color temperature meters are used for all types of video and motion picture lighting sources. Our eyes are unreliable in judging color because our brain adjusts and compensates; it will tell us that a wide variety of colors are "white." A color meter or vectorscope is a far more dependable way of judging color; however, most do not measure CRI.
Figure 12.4. (top) An 18K HMI with a Chimera softbox in use on a day exterior location.
Figure 12.5. (above) An LED 10K by Mole-Richardson.
BALLASTS
All HMIs require a ballast, which acts as a current limiter. The reason for this is simple: an arc is basically a dead short; if the current were allowed to flow freely, the circuit would overload and either blow the fuse or burn up. The electronic ballasts also allow the unit to operate on a square wave (unlike the sine wave of normal alternating current electricity).
Flicker-free ballasts use square-wave technology to provide flickerless shooting at any frame rate. With some units there is a penalty paid for flicker-free shooting at frame rates other than sync sound speed: it results in a significantly higher noise level. If the ballasts can be placed outside or if you're not recording audio, this is not a problem. It is not usually an issue as high-speed shooting rarely involves recording audio.
Figure 12.6. Chicken Coops hanging from the grid create an overall ambient light on this green screen shoot. A Chicken Coop is a rectangular box containing six 1000-watt bulbs. Notice that some of them have duvetyne skirts to control the spill. As we'll discuss in Technical Issues, lighting the green or blue screen evenly is essential to getting a proper key. This shoot has a camera mounted on an extending Technocrane and another on a sled dolly. (Photo courtesy Seagate Films)
Header cables are the power connection from the ballast to the light head
itself. Many larger HMIs can only use two header cables; a third header
will usually result in a voltage loss too great to get the lamp to fire up.
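The header-cable limit is ordinary voltage-drop arithmetic: each added cable adds conductor resistance, and the drop grows with the lamp's current draw. A rough sketch with assumed values (actual currents, cable resistances, and what a given ballast will tolerate vary by model):

```python
def head_voltage(supply_v: float, current_a: float,
                 cables: int, ohms_per_conductor: float) -> float:
    """Voltage left at the lamp head after the round-trip drop across
    the header run (feeder and return conductors)."""
    drop = current_a * (2 * ohms_per_conductor * cables)
    return supply_v - drop

# An 18K head drawing ~90 A on a 220 V feed, with 0.05 ohm per
# conductor per cable (all assumed, illustrative values):
for n in (1, 2, 3):
    print(f"{n} header(s): {head_voltage(220, 90, n, 0.05):.0f} V at the head")
```

Whether the lamp still strikes at a given head voltage depends on the ballast, which is why two headers may work where three will not.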
Square wave refers to the shape of the sine wave of the alternating current after it has been reshaped by the electronics of the ballast. Flicker is discussed in more detail in the chapter on Technical Issues (and in Motion Picture and Video Lighting by the same author as this book), but suffice it to say here that the normal sine wave of AC current leaves too many "gaps" in the light output that become visible if the camera shutter is not synchronized to its rhythm. By squaring the wave, these gaps are minimized and there is less chance of flicker. This is especially important if you are shooting at anything other than normal speed; high-speed photography, in particular, will create problems. It is important to note that flicker can be a problem in video also, just as with film cameras.
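The shutter-sync point can be made concrete. Under sine-wave power, the light pulses at twice the mains frequency; in a simplified model, a frame rate is flicker-safe when a whole number of pulses falls within each frame. (Real flicker tables also account for shutter angle; this is only a sketch of the idea.)

```python
def safe_frame_rates(mains_hz: float, max_fps: int = 240):
    """Frame rates at which each frame sees a whole number of light
    pulses, given that the light pulses at twice the mains frequency."""
    pulses_per_sec = 2 * mains_hz
    return [round(pulses_per_sec / n, 3)
            for n in range(1, int(pulses_per_sec))
            if pulses_per_sec / n <= max_fps]

print(safe_frame_rates(60)[:6])  # 120, 60, 40, 30, 24, 20 ...
print(safe_frame_rates(50)[:6])  # 100, 50, 33.333, 25, 20, 16.667 ...
```

Note that 24 fps divides evenly into 120 pulses but not into 100, which is why 24 fps is safe under 60 Hz mains but not under 50 Hz without a flicker-free ballast.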
Voltages as high as 12,000 VAC (volts AC) or more are needed to start the arc, which is provided by a separate ignitor circuit in the ballast. This creates the power needed for the electric current to jump across the gap between the two electrodes. The typical operating voltage is around 200V. When a lamp is already hot, much higher voltages are needed in order to ionize the pressurized gap between the electrodes. This can be from 20 kV (kilovolts) to more than 65 kV. For this reason, some HMIs cannot be restruck while they are hot, which means you may have to wait for the light to cool before you can start it again. This can be a major hindrance when the whole crew is waiting on it. Hot restrike, which generates a higher
Figure 12.7. Kino Flo’s Celeb LED panels
in use for a twilight car shot (Photo
courtesy of Kino Flo)
When ordering any large lamp, it is crucial to ask these questions and
be sure the rental house will provide the appropriate distribution equip-
ment or adapters—remember, if you don’t order it, it won’t be there. You
must be very thorough when placing an order. Failure to do so may result
in the light not being functional. Some makes of HMIs provide for head
balancing. This is accomplished by sliding the yoke support backward or
forward on the head. This is a useful feature when adding or subtracting
barn doors, frames, or other items that alter the balance of light.
4K AND 2.5K
The smaller HMIs, the 4K and 2.5K, are general purpose lights, doing
much of the work that used to be assigned to 5K and 10K tungsten lights.
Slightly smaller than the bigger HMIs, they can be easily flown and rigged
and will fit in some fairly tight spots.
1.2K AND SMALLER UNITS
The smallest lamps, the 1.2K, 575, 400, and 200 watt HMIs, are versatile
units. Lightweight and fairly compact, they can be used in a variety of sit-
uations. The electronic ballasts for the small units have become portable
enough to be hidden in places where larger units might be visible. They
can also be wall-plugged, which means no generator or other supplemental
power supply is needed on location.
HMI PAR UNITS
Some of the most powerful, intense lights available are HMI PARs; they have the high output of HMIs and the tightly focused beam of the PAR reflector (Figure 12.9). The largest units are 12K and 18K, but HMI PARs are made in smaller sizes as well, down to 125 watts. Arri Lighting (part of the Arri group) makes a popular unit called the Pocket PAR in these smaller sizes.
One particularly versatile unit is the 1.8K HMI PAR, made by several manufacturers. What makes it special is that it is small enough (in wattage) to be plugged into a 20-amp household circuit, but being a PAR it has a healthy output, which in conjunction with its daylight balance means it has a wide variety of uses in daylight situations: fill when bounced or through diffusion, or for a small shaft of light through a window. The Arri M18 is extremely efficient and is the brightest light that can be plugged into a household circuit (Figures 12.10 and 12.11).
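The "brightest light on a household circuit" claim is straightforward wattage arithmetic. The 10% ballast overhead below is an assumption for illustration, not a manufacturer figure:

```python
def circuit_capacity_w(volts: float, amps: float) -> float:
    """Total wattage a circuit can deliver."""
    return volts * amps

def total_draw_w(lamp_watts: float, ballast_overhead: float = 0.10) -> float:
    """Lamp wattage plus an assumed fractional ballast loss."""
    return lamp_watts * (1 + ballast_overhead)

capacity = circuit_capacity_w(120, 20)   # 2400 W available on a 20 A circuit
draw = total_draw_w(1800)                # ~1980 W with assumed ballast losses
print(f"capacity {capacity:.0f} W, draw {draw:.0f} W, "
      f"headroom {capacity - draw:.0f} W")
```

An 1,800-watt head plus ballast losses still fits under the 2,400-watt ceiling of a 20-amp, 120-volt circuit; a 2.5K HMI would not.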
ARRIMAX
The ArriMax is a newer type of light, in a new power class. The faceted reflector and open face design of ArriMax fixtures make them very efficient while retaining the hard light quality of a Fresnel. The 9K ArriMax has the output of a 12K Fresnel, while the 18K surpasses the output of an 18K Fresnel.
Figure 12.9. An HMI PAR providing fill light for a rolling shot. The car, camera, and lighting are set up on a low-boy, a type of low trailer used for these kinds of shots. A normal trailer would place the car much higher and the shots would look unnatural. Notice that the light is not mounted on a light stand or even a candle stick, but on a purpose-built SpeedRail rig. (Photo courtesy E. Gustavo Petersen)
RULES FOR USING HMI UNITS
• Keep the ballast dry. On wet ground, use apple boxes, rubber
mats, or other insulation material.
• Check the stand and ballast with a meter for leakage by measuring
the voltage between the stand and any ground. There will usu-
ally be a few volts, but anything above 10 or 15 volts indicates a
potential problem.
• Avoid getting dirt or finger marks on the lamps: oil from the skin
will degrade the glass and create a potential failure point. Many
lamps come provided with a special cleaning cloth.
• Ensure that there is good contact between the lamp base and the
holder. Contamination will increase resistance and impair proper
cooling.
• The filling tip (nipple) should always be above the discharge, or
there is a risk of a cold spot developing inside the discharge cham-
ber.
• Running at above rated voltage may result in failure.
• Excessive cooling or direct airflow on the lamp may cool the lamp
below its operating temperature, which can result in a light with
a high color temperature and inferior CRI.
POTENTIAL PROBLEMS
HMIs (or any light with a ballast) may sometimes fail to function properly. Be sure to have a few extra header cables on hand: they are the most common cause of malfunctions. The safety switch on the lens can also cause trouble. Never try to bypass it, however; it serves an important function. HMIs should never be operated without the glass lens, which filters out harmful ultraviolet radiation that can damage someone's eyes. When they do fail to fire:
• Check the breakers. Some HMIs have more than one breaker.
• After killing the power, open the lens and check the micro-switch that contacts the lens housing. Make sure it is operating properly and making contact. Wiggle it, but don't be violent.
• If that fails, try another header cable. If you are running more than one header to a light, disconnect and try each one individually. Look for broken pins or dirt in the receptacle.
• Check the power. HMIs won't fire if the voltage is low. Generally they need at least 108 volts to fire.
• Try the head with a different ballast and vice versa.
• Let the light cool. Some lights won't do a hot restrike.
Figure 12.10. (left) The Arri M-Series M18 can be powered from 20 amp household outlets but is 70% brighter than a typical 1.2K HMI PAR. (Photo courtesy Arri Group)
Figure 12.11. (above) An Arri M18 with ballast. It's an 1,800 watt open face PAR. It has a beam spread adjustable from 20° to 60°.
XENONS
Xenons are similar to HMIs since they are a gas discharge arc with a ballast. They feature a polished parabolic reflector that gives them amazing throw and an almost laser-like beam. At full spot they can project a tight beam several blocks with a small amount of spread. Xenons are very efficient, with the highest lumens-per-watt output of any light. They come in five sizes: 1K, 2K, 4K, 7K, and 10K. There is also a 75-watt sungun unit. The 1K and 2K units come in 110 and 220-volt models, some of which can be wall-plugged. This produces a high-output light that can be plugged into a wall outlet or a small portable generator. Larger xenons are extremely powerful and must be used cautiously: they can quickly crack a window.
Seventy-five-watt xenon sunguns can be used for flashlight effects. As with larger xenons, there is a hole or a hot spot in the center of the beam (depending on the focus) that cannot be eliminated. Xenon bulbs do not shift in color temperature as they age or as voltage shifts.
TUNGSTEN LIGHTS
These lamps have a filament of tungsten wire. There are two types of tungsten Fresnels: studio and baby. The studio light is the full-size unit, and the baby is a smaller housing and lens, making it more compact for location use (Figures 12.12 and 12.13). As a rule, the baby version is the studio housing of the next smaller size (the body of a baby 5K is similar to the body of a studio 2K). In most countries outside the United States, the electrical supply is 220 volts; different bulbs are used that are suited to the appropriate voltage.
FRESNELS
Fresnel units are lights with lenses. Most film lights employ the stepped Fresnel type lens, with a few exceptions that use a simpler plano-convex lens, such as a Dedo or an ellipsoidal (Leko). A Fresnel lens is a stepped ring design that reduces the thickness of the lens to save on cost and also prevent heat buildup in the center of the glass, which can cause cracking. The primary advantage of Fresnel lights is their ability to produce clean, well-defined shadows.
TWENTY K
The biggest tungsten light now in use is the 20K. It is a large unit with tre-
mendous output. Many jobs that were formerly done by the 10K are now
done with this light. Most 20K units use bulbs that run at 220 volts (which
may require special electrical distribution), and several models come with
an external dimmer (Figure 12.8).
TENNERS
The 10K tungsten Fresnel comes in three basic versions:
• The baby 10K provides high-intensity output with a fairly com-
pact, easily transportable unit with a 14-inch Fresnel lens.
• The basic 10K, known as a “tenner” or studio 10K, has a 20-inch
Fresnel.
• The largest light of this group is the Big Eye tenner, which has
a 24-inch lens. The Big Eye is a very special light with quality
all its own. The DTY (10K) bulb provides a fairly small source,
while the extremely large Fresnel is a large radiator. The result
is a sharp, hard light with real bite but with a wraparound qual-
ity that gives it almost a soft light quality on subjects close to the
light. This is a characteristic of all very big lights that gives them
a unique quality.
It is important to never use a 20K, 10K, or a 5K pointing straight up (this applies to large HMIs and xenons as well). The lens blocks proper ventilation, and the unit will overheat. Also, the filament will not be properly supported and will sag and possibly touch the glass.
Figure 12.12. (top) A Mole-Richardson Baby Junior 2K. This is a 2,000-watt tungsten Fresnel. It is a baby in that it is smaller than the studio version of the light; junior is the old studio term for 2,000-watt Fresnels.
Figure 12.13. (middle) A Mole Baby 1K Fresnel, commonly called a Baby Baby because it is smaller than the studio version of this type and also the term baby is the traditional name for a 1,000-watt Fresnel.
Figure 12.14. (above) The Mole-Richardson open face 2K, or Mighty Mole.
SENIOR/5K
Although it is available in both versions, the Baby 5K is far more popular than the larger unit. It can work as a general purpose big light and a fill used against a 10K. The 5K is also called a senior.
JUNIOR/2K
The 2K Fresnel is also known as a deuce or a junior. It has enough power to bring a single subject or actor up to a reasonable exposure, even with diffusion in front of the lens. Juniors are also useful as backlights, rims, and kickers. Baby juniors (called BJs) are more compact and are extraordinarily versatile units.
BABY/1K
Thousand-watt units are known as 1Ks (one K) or babies. The 1K is used as a key light, a splash on the wall, a small back light, a hard fill, and for dozens of other uses. The baby can use either a 750-watt bulb (EGR) or a 1,000-watt bulb (EGT). Most are now used with the 1K quartz bulb, but are still sometimes called 750s. The Baby 1K, also called a Baby Baby, is the small-size version (Figure 12.13). Because of its smaller lens and box, it has a wider spread than the studio baby.
TWEENIE /650
The Tweenie is “between” the 1K and the Inky. The Tweenie is often just
the right light for the small jobs a baby used to do, even as a key light. It is
very useful for a number of small jobs and easily hidden or rigged above
the set.
BETWEENIE, INBETWEENIE, INKY, AND PEPPER
These are similar to Tweenies but smaller. The Betweenie is a 300-watt unit
and the InBetweenie uses a 200 watt bulb and is often used instead of an
Inky (also 200 watts). At 100, 200, or 300 watts (depending on the bulb and
size of the housing), the Pepper is a smaller unit, but up close it can deliver a
surprising amount of light. The Inky at 200 watts is great for a tiny spritz
of light on the set, as an eye light, a small fll, or for an emergency last-
minute light to just raise the exposure a bit on a small area.
OPEN FACE
Some 2K, 1K, and 650 units are available as open face lights; that is, they have no lenses, but they do have some spot/flood focusing (Figure 12.14). Their light is raw and can be uneven, but they do have a tremendous output for their size. They are good for bounce or shooting through diffusion. They are a good source when all you need is raw power and the control that a Fresnel affords isn't needed.
PARS
PAR stands for parabolic aluminized reflector. A parabola is an ideal shape to collect all of the light rays and project them out in the same direction. It is the shape of reflector that is going to give the narrowest, most concentrated beam. In conjunction with this, all PAR units have a lens, which functions primarily to concentrate or spread the beam. Tungsten PARs generally come with a fixed lens that is part of the unit: they are pretty much the same as a car headlight. HMI PARs come with a set of interchangeable lenses: these go from a very wide beam to a very narrow beam.
Figure 12.15. (top) Skypans rigged on a large set by gaffer Michael Gallart. Skypans are very simple lights: just a socket for the bulb and a pan reflector. They can use 5K, 10K, or 20K bulbs. Also on the trusses are 5K Fresnels and space lights. (Photo courtesy Michael Gallart)
Figure 12.16. (above) Two Mole FAY lights boxed in with some 4x8 floppies for control.
The disadvantage of PARs is that the beam generally covers only a very small area and is not a very complimentary light for actors because it tends to be uneven and raw, but it is useful for many purposes that call for just raw power.
PARs come in two basic varieties: film versions come in a solid rotatable housing such as Mole-Richardson's MolePar (Figure 12.19), which features barn doors and scrim holders, and in a flimsier theatrical version called a PAR can. Theatrical lights are not generally as sturdily built because they are usually hung in a theater and then left alone. They don't get the rough treatment and adverse conditions that film and video lights do. PARs
(especially the very concentrated VNSP bulbs) can quickly burn through even the toughest gels, melt bead board, and set muslin diffusion on fire. PARs with a dichroic coating have an output that is very close to daylight (blue) balance. Small PAR 48s and 36s are also available at lower voltages, as well as 110 and 220 volts.
Figure 12.17. (top) On the set for the burning town in 1917, the tower of multi-PAR units set up to simulate fire.
Figure 12.18. (above) The lighting tower in daylight with director Sam Mendes and cinematographer Roger Deakins. The lights are various sizes of Maxi-Brutes. Fire flicker is achieved by running the lights through a dimmer board.
PAR GROUPS
PARs are also made in groups, one of the best known being the Maxi Brute, a powerful unit with tremendous punch and throw. They are used for large night exteriors and in large-scale interior applications: aircraft hangars, arenas, and so on. They can also be used directly or through gel, muslin, and so on, when very high light levels are needed to get through heavy diffusion. All PARs generate very intense heat; use caution: they can crack windows and char wood and other materials.
Maxi Brutes and Dinos are similar in design but different in size. Maxis come in configurations of 6, 9, or 12 x PAR 64 lamps, the most common being the 9-lamp head. A Dino or Moleeno is 36 PAR 64 lamps. Other variations of this design exist as well (Figures 12.17 and 12.18).
Fay lights are clusters of 650-watt PAR 36s and come in configurations up to 9 or 12 lamps (Figure 12.16). Wendy lights, developed by cinematographer David Watkin, come in large panels with the same PAR 36 lamps (usually DWE).
All the bulbs on most multi-PARs are individually switchable, which makes for very simple intensity control. All PAR group lights allow for spot, medium, and flood bulbs to be interchanged for different coverages. The FAY bulbs are dichroic daylight bulbs; tungsten bulbs (FCX) can also be used. They can be used as daylight fill in place of HMIs. They are not exactly daylight balance but are very close and can be corrected with gels if necessary. Most people refer to any PAR 36 dichroic bulb as a FAY, but in fact, there are several types. FAY is the ANSI code for a 650-watt PAR36 dichroic daylight bulb with ferrule contacts. If the bulb has screw terminals, it is an FBE/FGK. With diffusion, these units can be used as a large-source soft light.
SOFT LIGHTS
Studio soft lights consist of one or more 1,000-watt or 1,500-watt bulbs directed into a clamshell white-painted reflector that bounces light in a random pattern, making a light which is apparently as large as the front opening. They vary from the 1K studio soft (the Baby soft, also known as a 750 soft) up to the powerful 8K Studio Soft, which has eight individually switchable bulbs. Soft lights have certain basic problems: they are fairly inefficient in their light output; they are bulky and hard to transport; and like all soft sources, they are difficult to control. While the large reflector does make the light "soft," the random bounce pattern makes the light still somewhat raw and unpleasant, and most people add a little light diffusion. Big studio softs through a large frame of diffusion are a quick way to create a large soft source in the studio. Often used with the studio soft is the eggcrate, which minimizes side spill and does make the beam a bit more controllable. Soft lights see most of their use in television studios, where they provide a soft source without additional rigging. However, tungsten softlights in television news studios have been almost entirely replaced by Kino Flo units for one simple reason: to save on air conditioning costs. The color-correct fluorescent lights generate substantially less heat, which can be a real problem for studios, where they might be in use 24 hours a day. Since they are more or less permanently flown, their bulkiness is not a problem. Small compact versions of the 2K and 1K soft lights are called
zip lights (Figure 12.20). They have the same width but half the height of a soft light of similar wattage. Because of their compactness, zips are great for slipping into tight spaces.
Figure 12.19. (top) A 1K MolePar. The bulb is changed to achieve different beam spreads, from very narrow to wide.
Figure 12.20. (above) A zip light on a wall hanger.
BARGER BAGLIGHTS
Barger makes a type of softlight that is compact and efficient; it consists of several 1K tubular bulbs in a housing. It is always used with a Chimera, which is a self-contained softbox that fits on the front of the light. This has many advantages. Normally to make a light soft, it is necessary to put a diffusion frame in front of it; then to control the spill, several flags are needed. This means there might be as many as six stands. This becomes a real problem when you need to move the light quickly. A softbox such as a Chimera makes the entire unit fit on one stand. They are often used with a soft eggcrate on the front, which helps control the spill.
COLOR-CORRECT FLUORESCENTS
Color-correct fluorescent tubes are lightweight, versatile, and have very low power consumption. Pioneered by the Kino Flo company, they are extremely lightweight, compact, and portable sources. Achieving a truly soft light can be difficult and time-consuming, whether it's done by bouncing off a large white surface or by punching big lights through heavy diffusion. Either way takes up a lot of room and calls for a lot of flagging to control it.
Kino Flos had their origin in 1987. While working on the film Barfly, DP Robby Mueller was shooting in a cramped interior that didn't leave much room for a conventional bounce or diffusion soft source. His gaffer Frieder Hochheim came up with an answer: they constructed high-frequency fluorescent lights. By using remote ballasts, the fixtures were maneuverable enough to be taped to walls and mounted behind the bar; Kino Flos were born (Figure 12.24). Kino Flo has now switched to all LED tubes for their lights.
The ballasts are high-frequency, which reduces the potential problem of flicker that is always present with fluorescent type sources. Second, the bulbs are truly color correct. Colored bulbs are also available for various effects, as well as for greenscreen, bluescreen, or redscreen. Kino makes a variety of extremely large rigs that can either frontlight or backlight an effects screen. An added bonus of color-correct, high-frequency fluorescents is that they generate considerably less heat than either tungsten or HMI, which is a great advantage in small locations.
Figure 12.21. (above) A China ball (Chinese lantern) suspended from a C-stand. These are a cheap and extremely lightweight source of soft light and are easily rigged or even floated on a boom pole to be mobile in a scene.
Figure 12.22. (right) A SoftSun 100 on a day exterior. Note that the ballast on the right is set on a piece of plywood because the ground is wet. (Photo courtesy Attitude Specialty Lighting)
OTHER TYPES OF UNITS
Besides Fresnels, open face, LED, and fluorescent sources, there are a number of other kinds of lights that are commonly used for film and video lighting.
SOFTSUN
The SoftSun series of lights comes in a variety of sizes from 3.3K to an amazing 100K (Figure 12.22). SoftSuns require no warm-up time; they achieve maximum power and proper color temperature the moment they are turned on. SoftSuns are also the only large daylight-balance light source that can be dimmed with minimal shift in color temperature.
CYCS, STRIPS, NOOKS, AND BROADS
When just plain output is needed, broad lights are strictly no-frills, utilitarian lights. They are just a box with a double-ended bulb. As simple as it is, the broad light has an important place in film history. In classical Hollywood hard lighting, the fill near the camera was generally a broad light with a diffuser. The distinctive feature of the broad light is its rectangular beam pattern, which makes blending them on a flat wall or cyc much easier: imagine how difficult it would be to smoothly combine the round, spotty beams of Mighty Moles or Fresnel lights.
The smallest version of the broad is the nook, which, as its name implies, is designed for fitting into nooks and crannies. The nook light is a compact, raw-light unit, usually fitted with an FCM or FHM 1000-watt bulb. The nook is just a bulb holder with a reflector. Although barn doors are usually available, nooks aren't generally called on for much subtlety, but
Figure 12.23. (top) A Rosco Maxi LED softlight used for a product shot. (Photo courtesy Rosco)
Figure 12.24. (below) Color-correct fluorescents by Kino Flo. Their light weight makes rigging them easier and quicker. Notice how the large window at left rear has been blacked out with a large solid of black duvetyne. A pipe has been rigged to the ceiling to support some of the lights. Note also the snoot boxes on the two tungsten zip lights. (Photo courtesy Kino Flo)
they are an efficient and versatile source for box light rigs, large silk overhead lights, and for large arrays to punch through frames. A number of units are specifically designed for illuminating cycs and large backdrops. For the most part, they are open face 1K and 1.5K units in small boxes; these are called cycs, cyc strips, or Far Cycs (which create a more even distribution up and down the background).
CHINESE LANTERNS AND SPACELIGHTS
Chinese lanterns (China balls) are the ordinary paper globe lamps available at houseware stores (Figure 12.21). A socket is suspended inside that holds either a household bulb or a 1K or 2K lamp. Just about any rig is possible if the lantern is large enough to keep the paper a safe distance from the hot bulb. Control is accomplished by painting the paper, or taping gel or diffusion to it. Similar in principle are spacelights (Figure 12.26), which are basically big silk bags with 1, 2, 6, or 12 1K nook lights inside. For establishing an even overall base level on a set, they can be quite useful. When cabling, you will want to separate them into different circuits to give you some degree of control over the level. China balls are inexpensive and easy to rig.
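The advice to cable spacelights to separate circuits is simple load arithmetic; the 20-amp, 120-volt circuit size below is an assumption for illustration (stage distribution varies):

```python
import math

def circuits_needed(bulbs: int, bulb_watts: float,
                    volts: float = 120, amps: float = 20) -> int:
    """Circuits required if each circuit carries only whole bulbs."""
    per_circuit = int((volts * amps) // bulb_watts)  # whole bulbs per circuit
    return math.ceil(bulbs / per_circuit)

# A 12-bulb spacelight of 1K nooks is 12,000 W in total:
print(circuits_needed(12, 1000))  # 6 circuits, two bulbs each
```

Splitting the bulbs this way also gives you several switchable steps of intensity rather than all-or-nothing control of the base level.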
Figure 12.25. Rigged for a night shoot on Madam Secretary. The beach always has a breeze, so a traditional balloon was too hazardous. This "balloon box" uses the same bulb envelope as a balloon but on a rigid frame. Diffusion is affixed to a Speed Rail frame, attached to an 80-foot articulating boom lift. This rig contains two 2.5K HMI and four 2K halogen bulbs; a hybrid. The dimmer and ballast stay on the ground; a single multi-conductor cable supplies power to the lamps. Rollers attached to the arm allow the cable to pay out. The lamp operator stays at ground level to control the entire fixture. It's cool and safe. (Photo courtesy Michael Gallart)
SELF-CONTAINED CRANE RIGS
There are a number of units that consist of several large HMIs rigged on a crane. Most also carry their own generator. Musco was the first of these, but now there are several to choose from. These units can provide workable illumination up to a half mile away.
ELLIPSOIDAL REFLECTOR SPOTS
The ellipsoidal reflector spot (ERS) is a theatrical light, but it is used as a small effects light because of its precise beam control by the blades. Called lekos in the theater, on a film set you will frequently hear them referred to as Source Fours, manufactured by Electronic Theater Controls (ETC) (Figure 12.27). Because the blades and gobo holder are located at the focal point of the lens, the beam can be focused sharply and patterned gobos can be inserted to give sharply detailed shadow effects. These lights come in sizes defined by their beam angle. The longer the focal length, the narrower the beam. They also make a unit that has a zoom. Some ERS spots have a gobo slot that holds a metal disk that will project a pattern. These patterns come in a vast array of designs, from random breakup patterns to very specific things such as the outline of a window or a tree.
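The relationship between beam angle and coverage is simple geometry: the lit pool grows with the throw distance times the tangent of half the beam angle. A sketch comparing two common fixed-barrel angles:

```python
import math

def pool_diameter(throw_ft: float, beam_angle_deg: float) -> float:
    """Diameter of the lit circle a fixture throws at a given distance."""
    return 2 * throw_ft * math.tan(math.radians(beam_angle_deg / 2))

# Compare a narrow 19-degree barrel with a wider 36-degree barrel
# at a 20-foot throw:
for angle in (19, 36):
    print(f"{angle} deg: {pool_diameter(20, angle):.1f} ft pool at 20 ft")
```

This is why a narrow barrel is chosen for a tight pattern projected from the grid, while a wide barrel covers a playing area from a short throw.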
BALLOON LIGHTS
Balloon lights provide a powerful and flexible tool for night exteriors. They can be either HMI or tungsten sources, or even a hybrid of the two as in Figure 12.25. They generate a soft, general fill light for large areas. Perhaps their greatest advantage is that they are much easier to hide than a crane or scaffolding. They are also faster to set up and to move. The disadvantage is that they can be very time-consuming and expensive to gel. Wind can always be a factor when flying balloon lights. The smaller the balloon, the lower the acceptable wind speeds. A good reference is to observe flags: if they're flapping straight out, it's too windy. This introduces an element of uncertainty into their use. Larger balloon lights usually come with an operator.
HANDHELD UNITS
Portable handheld, battery-operated units are generally called sunguns. There are two basic types: tungsten and HMI. Tungsten sunguns are either 12 volt or 30 volt and powered from battery belts; due to their inefficiency and short battery time, they are seldom used nowadays. Typically, a tungsten sungun will run for about fifteen minutes. Sunguns with HMI bulbs are daylight balance and more efficient in output than tungsten units. These have largely been replaced by LED light panels, many of which can operate off batteries for extended periods of time.
EQUIPMENT FOR DAY EXTERIORS
Direct sun is extremely harsh and contrasty, and it is also moving throughout the day. To deal with this, day exteriors can be approached in three ways: filling with large daylight balance units such as a big HMI, bouncing the existing light with reflectors, or covering the scene with a large silk to control the contrast. Sometimes it is some combination (Figures 12.28 and 12.30).

Figure 12.26. (top) Two sizes of space lights rigged on Speed Rail (aluminum tubing often used for grip rigs of all sorts). (Photo courtesy Jon Fauer, Film and Digital Times)

Figure 12.27. (above) A Source Four leko (ellipsoidal reflector spot or ERS).
Much of the time, lighting day exteriors will depend more on grip equipment such as silks, nets, and flags, in addition to whatever lights or reflectors may be working on the scene. Although artificial silk is one covering for large frames, it is also a generic term, and coverings such as Half Soft Frost, Hi-Lite, and 1/2 or 1/4 Grid Cloth are more frequently used for these situations. Reflectors are often used in exteriors—they are cheap, don't need power, and match the sun as it changes in intensity due to such factors as cloud cover. Any time reflectors are used, a grip needs to stand by to "shake it up" before every take; the reason is simple—the sun is constantly moving, so the reflector must be re-aimed each time. Sometimes it can be difficult to tell exactly where the reflector is aimed; a simple trick is to aim it down at the ground in front of you. This lets you more easily follow the reflected beam as you focus it on the action.
IMPROVISING LIGHTS
There are two reasons to build your own lights. One is that you just don't have equipment available to you. The other is special purposes—you need something to fit into an odd space, something you can "hide in plain sight," or a unit to do a particular job when there's nothing on the truck that will do the trick.
BOX LIGHTS
Basic lighting units can be made with two simple materials: foamcore and porcelain sockets. This can be useful if you just don't have the budget for lighting equipment, but such units are also sometimes used on professional sets to make lights that can be concealed by fitting into odd spaces or even disguised to be hidden in plain sight on the set. Foamcore is available at any film supplies store or, more locally, at any art supply store. When making lights, white on two sides is most commonly used. When buying foamcore to be used as grip equipment (bounce boards or negative fill), white on one side and black on the other is the best choice. Figure 12.31 shows a lighting unit made for a few dollars.

Figure 12.28. (top) A day exterior with negative fill (the black 12x12 solid that creates a shadow side) and shiny board reflectors that are aimed through 4x4 frames with diffusion. The fill is an 8x8 frame with Ultrabounce, a reflective material. The shiny boards are necessary in order to raise the level of the actors so that they are not significantly darker than the background.

Figure 12.29. (above) Knowing how to rig lights on a crane or lift is essential for all electricians and grips. In this rig, two Mole-Richardson Maxi Brutes are mounted on candlesticks which are secured to the basket of the crane. (Courtesy P&G Lighting)

CHRISTMAS TREE LIGHTS

White Christmas tree lights or fairy lights can be very useful on the set if you are making your own equipment. They have a number of uses, but the most common is lighting the interior of cars. They can be taped to the overhead for a soft glow, or tucked behind the steering wheel to put a soft light on the driver that mimics the glow of the instrument panel. Figure 12.32 is a frame from Kubrick's Eyes Wide Shut. The entire scene is lit only with the fairy lights you see and, to a lesser extent, by the chandeliers. This makes it possible to show the entire room, as there is no need to try to hide conventional lights.
PROJECTOR BULBS

Although designed for use in slide projectors, the compact size of these bulbs makes them useful in many situations. They can be used to build your own units or built into the set. They are available in a wide range of wattages and, most usefully, in several different voltages. 120-volt bulbs are useful on the set, while 12-volt and 24-volt bulbs are useful for "tricking out" a car, boat, or small aircraft. To "trick out" a vehicle means attaching to the battery and running hidden wire to places that might be used for lighting rigs (Figure 12.33).

Figure 12.30. (top) A day exterior. (Photo courtesy Nicholas Calabria)

Figure 12.31. (above) A box light made from foamcore. The LED mushroom bulb is held by a socket adapter through a hole in the foamcore base.
FOR MORE INFORMATION ON LIGHTING
Lighting is a vast subject; here we have room only to cover the basics. For more on lighting techniques, photometric data, grip equipment and rigging, electrical distribution, bulbs, and scene lighting examples, see Motion Picture and Video Lighting, 3rd Edition, by Blain Brown, also published by Focal Press.
13
lighting
Figure 13.1. Birdman was lit almost entirely with practical lamps.

THE FUNDAMENTALS OF LIGHTING

Lighting has nearly infinite permutations and variations. There is certainly no one "right" way to light a scene. As a result, there is no chance that we can just make a simple list of "proper" lighting techniques. What we can do, however, is try to identify what it is we want lighting to do for us. What jobs does it perform? What do we expect of "good" lighting? Starting this way, we have a better chance of evaluating when lighting is working for us and when it is falling short. Naturally, these are generalizations; there are always exceptions, as there are in all aspects of filmmaking.
THE [CONCEPTUAL] TOOLS OF LIGHTING
In the first chapter, we talked about the conceptual tools of cinematography; lighting is, of course, one of the most important of those tools. Now let's talk about the conceptual tools of lighting—the attributes of light: the things about light that we can change and manipulate to achieve our goals. They are:
THE ATTRIBUTES OF LIGHT
• Hard vs. soft light
• Altitude (height)
• Direction (from front, side, or back)
• Color
• Focus (confned or wide)
• Texture (breakup patterns)
• Movement
• Intensity/Contrast
HARD VS. SOFT
A key aspect of the quality of light is how hard or soft it is. This is, in fact, the aspect of light that we most often alter. "Hard" light means specular light—parallel beams which cast a clear, distinct shadow. Soft light is the opposite; it is diffuse light that hits the subject from many different angles and thus casts an indistinct, fuzzy shadow, if any. We'll go into more detail a little later.
Figure 13.2. Carrying a lamp in Skyfall. The light on Bond appears to be coming from the table lamp, but it's not.

WHAT ARE THE GOALS OF GOOD LIGHTING?

So what is it we want lighting to do for us? There are many jobs, and they include creating an image that has:
• A full range of tones and gradations of tone.
• Color control and balance.
• Shape and dimension in the individual subjects.
• Separation: subjects stand out against the background.
• Depth and dimension in the frame.
• Emphasis and focus.
• Texture.
• Mood and tone: emotional content.
• Exposure.
• Balance.
• Shadows.
• Visual storytelling.
• Visual metaphor.
• Reveal and conceal.
• Invisible Technique.
Lighting can also help form the composition and, most importantly, it can help you tell the story. The goal is to have your cinematography help tell the story, establish a mood and tone, and add up to a coherent visual presentation. As any working cinematographer will tell you, lighting is usually the most important aspect of the visual effect.
FULL RANGE OF TONES
In most cases, we want an image to have a full range of tones from black to white (tonal range is always discussed in terms of grayscale, without regard to color). There are exceptions, of course, but in general, an image that has a broad range of tones, with subtle gradations all along the way, is going to be more pleasing to the eye, more realistic, and have more impact. The range of tones in a scene depends on what is actually in the scene—the colors and the textures—but it is also a product of the lighting: flat front lighting (Figure 13.11) will tend to make everything dull and low contrast, the main reason we almost always avoid lighting this way. Lighting with lots of shadows and highlights will increase the contrast of a scene and result in a broader range of tones. Naturally, this may not be the final image structure you are aiming for—there are occasions where you want a low-contrast, dull appearance to a scene—but this happens far less often.

"To me, if there's an achievement to lighting and photography in a film, it's because nothing stands out, it all works as a piece. And you feel that these actors are in this situation and the audience is not thrown by a pretty picture or by bad lighting."
—Roger Deakins (Skyfall, Fargo, The Big Lebowski)
Figure 13.3. (top) Imitating a street light on this twilight exterior in Ocean's Eleven. Street lights are green, but not really this green.

Figure 13.4. (above) Shadows are a key element of this shot from Blade Runner. The shadows of the Venetian blinds add texture to the shot.

COLOR CONTROL AND COLOR BALANCE

Up until the eighties, it was conventional to precisely color balance all lighting sources—for example, making all of them tungsten balance, daylight balance, or even fluorescent balance—and then correcting the color in postproduction. Now it is common to mix different color sources in a scene. Color control is also important to the mood and tone of a scene (Figure 13.3).
SHAPE
Flat front lighting (Figure 13.11) does not reveal the shape and form of the subject. It tends to flatten everything out, making the scene two-dimensional. Lighting from the side or back reveals the shape of an object—its texture and subtleties of form. This is important not only for the overall depth of the shot; it can also reveal character, emotional values, and other clues that may have story importance. It also makes the image more real, more palpable, more recognizable.
SEPARATION
By separation, we mean making the main subjects "stand out" from the background. A frequently used method for doing this is a backlight. Another way is to make the area behind the main subjects significantly darker or brighter than the subject. In our quest to make an image as three-dimensional as possible, we usually try to create a foreground, midground, and background in a shot; separation is an important part of this—Figures 13.5 and 13.6.
Figure 13.5. (top) Here, the actress is lit only with a soft key; the scene is flat, two-dimensional, and doesn't seem real.

Figure 13.6. (bottom) Lighting can create depth and three-dimensionality in a scene. The addition of a backlight, a practical lamp, and lighting through the window and in the hallway adds separation, making the scene more three-dimensional and realistic.
DEPTH
Whether projected on a screen or viewed on a monitor, film and video are two-dimensional: flat (3D is really just an illusion). A big part of our job is trying to make this flat art appear as three-dimensional as possible—to give it depth, shape, and perspective, to bring it alive as a real world as much as possible. Lighting plays a huge role in this, which is a big part of why "flat" lighting is so frequently the enemy. Flat lighting is light that comes from very near the camera, like the flash mounted on a consumer still camera: it is on axis with the lens. As a result, it just flatly illuminates the subject evenly, erasing the subject's natural three-dimensional quality.
TEXTURE
As with shape, light from the axis of the lens (flat lighting) tends to obscure the surface texture of materials. The reason is simple: we perceive the texture of a subject from its shadows. Light that comes from near the camera creates no shadows. The more that light comes from the side, the more it creates shadows, which is what reveals texture. Texture can also be present in the lighting itself (Figure 13.4).
MOOD AND TONE
Let's recall our discussion of the word "cinematic." In conversation, it is often used to describe something that is "movie-like"—someone might say a particular novel is cinematic if it has fast-moving action, lots of description, and very little exposition. That is not how we will use the term here. In this context, we will use cinematic to describe all the tools, techniques, and methods we use to add layers of meaning, emotion, tone, and mood to the content.
Figure 13.7. A big soft overhead source created by bouncing lekos into panels. Because of the panels, it is not a directly overhead source and there are fewer shadows on her face. (Courtesy International Design School)

As every good camera and lighting person knows, we can take any particular scene and make it look scary or beautiful or ominous or whatever the story calls for—in conjunction with use of lens and camera, of course. Many tools affect the mood and tone of a scene: color, framing, use of lens, frame rate, handheld or mounted camera—indeed, everything we can do with camera and lighting can be used to affect the audience's perception of the scene.
SHADOWS
Never forget that the shadows can often be as important as the light (Figure 13.8). One characteristic of early films, before the film noir era and then again in the Technicolor phase of filmmaking, was that they were overlit—no shadows anywhere. Overlighting a scene is not only completely unrealistic; it also lacks the contrast, definition, and focus on key elements of the scene that a controlled play of light and shadow can bring.
REVEAL AND CONCEAL
Light can serve a specific story purpose, especially when it is used to conceal or reveal some narrative element. The most obvious example is in horror films, when the maniac is concealed in the shadows until the babysitter turns on the basement light.
EXPOSURE AND LIGHTING
It is important to remember in this context that exposure is about more than just "it's too light" or "it's too dark." Exposure for mood and tone is obvious, but there are other considerations as well—proper exposure and camera settings are critical to color saturation and to achieving a full range of grayscale tones. There are really two ways in which you have to think about exposure. One is the overall exposure of the scene; this is controlled by the iris, the shutter speed, gain, and neutral density filters. All of these control exposure for the entire frame; except for some types of neutral density filters called grads, there is no way to be selective about a certain part of the frame. The other aspect of exposure is balance within the frame. Film and video can only accommodate a certain brightness range, and keeping the brightness range within the limits of your particular film or video camera is mostly the job of lighting. It's not merely a technical job of conforming your lighting to the available latitude: the lighting balance also affects the mood, tone, and style of the scene.
Figure 13.8. Chiaroscuro in Schindler's List. A single soft side light leaves most of the scene in ominous shadow.

SOME LIGHTING TERMINOLOGY

• Key light: The dominant light on people or objects. The "main" light on a scene.
• Fill light: Light that fills in the shadows not lit by the key light. Lighting is sometimes described in terms of the key/fill ratio, also called the contrast ratio.
• Backlight: Light that hits a person or object from behind and above. A rim or edge light might be added to separate a dark side of a face or object from the background, or to make up for a lack of fill.
• Kicker: A light from behind that grazes along an actor's cheek on the fill side (the side opposite the key light). Often a kicker defines the face well enough that a fill is not even necessary. It should not be confused with a backlight, which generally covers both sides equally.
• Sidelight: A light that comes from the side, relative to the actor. Usually dramatic, it creates great chiaroscuro (meaning light and shadow) if there is little or no fill, but it may be a bit too harsh for close-ups, where some adjustment or slight fill might be needed.
• Topper: Light directly from above. (The word can also refer to a flag that cuts off the upper part of a light.)
• Hard light: Light from the sun or a small lighting source such as a Fresnel that creates sharp, well-defined shadows.
• Soft light: Light from a large source that creates soft, fuzzy shadows or (if soft enough) no shadows at all.
• Ambient light: There are two uses of this term. One means the light that just happens to be in a location. The second refers to soft, overhead light that is just sort of "there." It can also be a base light that opens up the shadows.
• Practicals: Actual working prop lights—table lamps, floor lamps, sconces, and so on. It is essential that all practical lamps have a dimmer on them for fine-tuning control; small dimmers for this purpose are called hand squeezers. Anything on a set that works is also called practical—for example, a refrigerator.
• Upstage/Downstage: Upstage is the part of the scene on the other side of the actors, opposite the side the camera is on. Downstage is the side the camera is on.
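As a quick worked illustration of the key/fill (contrast) ratio mentioned above—the meter readings here are hypothetical, and note that conventions for measuring the ratio vary slightly from one text to another—suppose the key side of a face meters at 200 footcandles and the fill side at 50 footcandles:

```latex
% Hypothetical incident-meter readings, key side vs. fill side
\[
\frac{E_{\mathrm{key\ side}}}{E_{\mathrm{fill\ side}}}
  = \frac{200\ \mathrm{fc}}{50\ \mathrm{fc}}
  = 4{:}1,
\qquad
\log_{2} 4 = 2\ \text{stops of difference.}
\]
```

Each doubling of the ratio is one stop: 2:1 is one stop between key and fill, 4:1 is two stops, 8:1 is three stops.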
Figure 13.9. (top) Hard light creates sharp, well-defined shadows. This is a Mighty Mole (open face 2K) direct. An open face light would rarely be used directly on an actor without diffusion or being bounced off a reflector—it is used here for illustration only.

Figure 13.10. (below) Soft light creates soft shadows that fall off gradually. In this case, the soft light is created by the same Mighty Mole being punched through a 4x4 frame covered with 216 (a heavy diffusion) and another 4x4 frame covered in muslin—a heavy cotton cloth that has been used as diffusion since the earliest days of the film business. Notice that creating a softer source has also turned the objectionable hotspot highlights on her skin into soft, gentle highlights in the lower photo.
• High key: Lighting that is bright and fairly shadowless, with lots of fill light; often used in fashion/beauty commercials.
• Low key: Lighting that is dark and shadowy, with little or no fill light. Can also be described as having a high key/fill ratio.
• Bounce light: Light that is reflected off something—a wall, the ceiling, a white or neutral surface, a silk, or just about anything else—usually to make the light softer.
• Available light: Whatever light already exists at the location. May be natural light (sun, sky, overcast day) or artificial (street lights, overhead fluorescents, etc.).
• Motivated lighting: The lighting appears to have a source such as a window, a lamp, a fireplace, and so on. In some cases, the light will come from a source visible in the scene; in others, it will only appear to come from a source that is visible in the scene.
WORKING WITH HARD LIGHT AND SOFT LIGHT
What makes hard light hard? What makes soft light soft? How do we distinguish between them? The styles and techniques of lighting are nearly infinite, but oddly, when you boil it down to the basics, there are only two types of light (in terms of what we are calling "quality" of light): hard light and soft light. There are, of course, all sorts of subtle gradations and variations between completely hard and fully soft.
Figure 13.11. (top) Flat front lighting creates no depth, no sense of three-dimensionality. It looks fake and "lit"—something we try to avoid.

Figure 13.12. (below) Light from the sides or back (anything other than flat front lighting) creates depth, dimension, and a more realistic feel. On the left side of his face, a kicker has been added to create more shape and a greater sense of depth and separation.
HARD LIGHT
Hard light is also called specular light. As we have seen, it is light that casts a clear, sharp shadow. It does this because the light rays are traveling relatively parallel. What creates a beam of light with the rays pretty much parallel? A very small light source. The smaller the source, the harder the light will be. This is an absolutely crucial point: how hard or soft a light appears is a function of the size of the radiating source (Figures 13.9 and 13.10).

Outside on a clear, sunny day, take a look at your shadow: it will be sharp and clean. Even though the sun is enormous, it is so far away that it appears as a small object in the sky—which makes it a fairly hard light. It is important to remember that it is not the absolute (actual) size of the source that matters—it is the size of the source relative to the subject.

In this example, the sun is acting as a hard light because it is a small, point source from our point of view. In reality, the sun is huge, of course; it is the fact that it is millions of miles away that allows it to function as a hard source on cloudless days. The importance of this is sometimes seen when an inexperienced crew builds a large soft source with silks or bounce and then positions it a long way from the subject—it may still be soft, but it will not be nearly as soft as it would be if it were much closer to the subject (the actors, typically). On sets, when the goal is a really soft look, you will most often see the large soft source just barely outside the frame. Of course, this often means that it can't be there for the wide shot (typically, the master) and has to be moved in closer for the close-ups. This is common practice, and a good crew can make quick work of it; however, it means that when you build a large soft source and "box it in" with flags to control spill and prevent flaring the lens, your electricians and grips need to keep the need for mobility in mind and be ready to go when the call comes to move in for the tighter shots.

"Light can be gentle, dangerous, dreamlike, bare, living, dead, misty, clear, hot, dark, violet, springlike, falling, straight, sensual, limited, poisonous, calm and soft."
—Sven Nykvist (Cries and Whispers, Chaplin, Persona)
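The relative-size point can be put in rough numbers (standard astronomical figures, added here purely as an illustration, not from the text): the sun's diameter is about 1.39 million km and its distance about 150 million km, so by the small-angle approximation it subtends only about half a degree of our view—optically a tiny source, which is why it casts hard shadows.

```latex
% Angular size of the sun as seen from Earth (small-angle approximation)
\[
\theta \approx \frac{D}{d}
       = \frac{1.39 \times 10^{6}\ \mathrm{km}}{1.496 \times 10^{8}\ \mathrm{km}}
       \approx 9.3 \times 10^{-3}\ \mathrm{rad}
       \approx 0.53^{\circ}
\]
```

By contrast, a 4x4 bounce placed one meter from an actor subtends over 60 degrees, which is why it reads as soft—and why moving the same source far away makes it behave more like a hard one.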
Figure 13.13. (top) Upstage is on the other side of the actors, away from the camera. Light from the upstage side gives pleasant shadows and is flattering to the face.

Figure 13.14. (above) Downstage is on the same side of the actors as the camera. Lighting from the downstage side is unpleasant for the face, puts the shadows in the wrong place, and is closer to flat front lighting—something to almost always avoid.

SOFT LIGHT

Soft light is the opposite; it is light that casts only a fuzzy, indistinct shadow, sometimes no shadow at all. What makes light soft? A very large source. Go outside on an overcast or cloudy day and you will have little or no shadow at all. This is because instead of a small, hard source (just the sun), the entire sky is now the light source—it's enormous. See Figures 13.12 and 13.23 for examples of hard and soft light compared.

How do we make soft light on the set? There are two ways. One is to bounce a light off a large white object. Typically we use things like foamcore (a lightweight artist board often used for temporary signs or mounting photographs) or soft materials such as cotton muslin or Ultrabounce—a cloth-like material designed specifically for this purpose—but you can use
Figure 13.15. Vermeer's Girl With a Pearl Earring is an excellent example of contained soft light—controlled so that it doesn't spill all over the set and stays only where you want it. Vermeer achieved this effect with the unique windows of a typical house in Amsterdam; you may need to have the grips contain the light for you.
INTENSITY
How bright or intense a light is clearly affects exposure, but remember that no matter how bright or dark the overall light of a scene is (within limits), we can adjust for it by exposing correctly with the iris, shutter, or neutral density filters. What is important here is the relative intensity of different lights within a scene—the relative balance of the various lights. These are really two completely different ways to think about the intensity and exposure of lighting in a scene: the overall lighting level, and the comparative difference between lights in a scene—usually referred to as the contrast ratio between the key and fill, but also applying to objects in the frame that generate their own light: windows, lampshades, candles, etc.
TEXTURE IN LIGHTING
Texture occurs in several ways. One is the inherent texture of the subject itself, but the one that concerns us here is the texture of the light itself. This is done by putting things in front of the light to break it up and add some variation of light and shadow. Things you put in front of the light are called gobos, and a particular type of gobo is the cuculoris or cookie, which comes in two types. Hard cookies are plywood with irregular cutouts. Soft cookies are wire mesh with a subtle pattern of translucent plastic. Temporary cookies can be cut from foamcore, show card, or almost anything that's available at the time. Other tricks include putting a shadow-casting object in front of the light; traditionally these include charlie bars—vertical bars used to create shadows. This effect can also be accomplished with strips of tape on an empty frame. Another method is to put lace in a frame and place it in front of the light.

"Lighting is so complex that it's hard to quantify. It's like playing piano. How did I do that? What did my fingers do? What made me think about where they should go? I like to equate cinema to music. I'm performing a musical composition when lighting a scene. There are crescendos, allegros, and pizzicatos. The visual language is an undulating language, and, like music, it has to have its peaks and valleys."
—Conrad Hall (American Beauty, Cool Hand Luke, In Cold Blood)
COLOR
Color is such a large and important issue that we devoted an entire chapter to the subject earlier in the book. There are several aspects to color as we use it in filmmaking:

• The image-making side of color: how we use it to make better and more impactful images.
• The storytelling aspect of color—the emotional, cultural context of color. One of the best references on this topic is If It's Purple, Someone's Gonna Die, by Patti Bellantoni, a fascinating look at using color in filmmaking.
• Color at the camera: the choice of film stock or video camera setup, and the use of LUTs and Looks to manipulate color in the camera or at the DIT cart.
• Controlling color at the light source, which involves color-balancing gels, party gels (random colors made by the gel manufacturers), and the choice of the proper lighting units.

Figure 13.19. Motivated lighting in O Brother, Where Art Thou?, photographed by Roger Deakins.

Figure 13.20. The light on Brad Pitt is motivated by the fluorescent lamp in the shot, but it's highly unlikely that it provides the actual light on him. There's no way you could get exposure out of two fluorescent tubes from that far away.
LIGHTING TECHNIQUES
The fundamental methods are discussed in detail later. They include:

• Ambient and available light.
• Classical lighting.
• Through the windows.
• Practicals and motivated lighting.

"Have a clear vision, design, and objective for every scene. Then, by lighting with your instincts along with your intention and setting your own level of excellence, you will find satisfaction."
—Rene Ohashi (Forsaken, Faces in the Crowd)

AMBIENT

The term ambient has two meanings. On location, it means light that is "just there." In lighting on a set, it means an overall fill that is added, usually from big soft overhead sources. Using ambient light means just going with whatever light is there when you arrive—street lights, daylight, windows, skylights, etc.—as in Figures 13.56 and 13.58.
Figures 13.21 and 13.22. (above) Two sides of a scene lit with back cross keys: one light on each side is one actor's key and the other actor's backlight. Both lights are from the upstage side. In this case, the woman's key is coming through a window and some lace to add texture; his key is hard, without any softening or texture. See the website for more on this lighting technique. Note also the matching eyelines—crucial in a dialog scene such as this.

Figure 13.23. (left) The basic components of lighting a person. Although it's often called three-point lighting, most often there are more lights than that, sometimes fewer.
CLASSICAL LIGHTING
Classical lighting refers to the hard light style that was the norm in big movie production for decades. Some refer to it as film noir lighting. It employs hard light Fresnels individually aimed at the actors, objects on the set, the background, and so on (Figure 13.18). The method is still used, although soft light on the actors is now more frequent.

"I like simplicity. I like using natural sources. I like images to look natural—as though somebody sitting in a room by a lamp is being lit by that lamp."
—Roger Deakins (The Big Lebowski, Barton Fink, 1917)

BRINGING IT THROUGH THE WINDOWS

Many DPs say that they "bring it through the windows" whenever possible—a more naturalistic look; it also frees the set of stands and cables, making setup changes quicker and more efficient (Figures 13.25, 13.26, and 13.27).

PRACTICALS AND MOTIVATED LIGHTING

Motivated lighting and practicals are closely related. Practicals are lamps and other sources that actually work, as in Figure 13.1. Figures 13.19 and 13.20 show a scene lit by a practical table lamp. These same illustrations show motivated lighting, which appears to come from the practicals (or windows, doors, etc.) but is actually coming from lighting units.
Figure 13.24. Shafts of light through a fan create a dynamic composition in 9 1/2 Weeks.

BASIC PRINCIPLES OF LIGHTING

Some basic principles:

• Avoid flat front lighting! Lights that come more from the sides and back are usually the way to accomplish this. Any time a light is right beside or behind the camera, that is a warning sign of possible flat, featureless lighting.
• Whenever possible, light people from the upstage side. Along with avoiding flat front lighting, this is probably the most important principle of lighting.
• Use techniques such as backlight, kickers, and background/set lights to separate the actors from the backgrounds and accentuate the actors' features.
• Use shadows to create chiaroscuro and depth, and to shape the scene and its mood. Some DPs say that "...the lights you don't turn on are as important as the ones you do turn on."
• Use lighting and exposure to achieve a full range of tones in the scene—this must take into account both the reflectances of the scene and the intensity of the lighting.
• When appropriate, add texture to your lights with gobos, cookies, and other methods. Some DPs, such as Wally Pfister, almost always add some texture to their key lights.
LIGHT FROM THE UPSTAGE SIDE—REVERSE KEY
One of the most universal principles of lighting is to light from the upstage side, as in Figures 13.48, 13.49, and 13.50. When the light is on the downstage side of the actors, it is near the camera, which results in flat front lighting—something you want to avoid.

BACK CROSS KEYS
One of the most useful and commonly used techniques is back cross keys. It's simple, straightforward, and fast, but also very effective. Take a look at the next ten dialog scenes you see in feature films, commercials, or television: there's a good chance that most or even all of them will be lit with this technique (Figures 13.42, 13.48, 13.49, and 13.50).

The idea is simplicity itself. For a two-person dialog scene (which constitutes a large majority of scenes in most films), one upstage light serves as one actor's key light and also the second actor's backlight. A second light does the opposite: it is the second actor's key light and the first actor's backlight. That's all there is to it, but you may want to add some fill, backlights, or whatever else the scene calls for.

Pretty photography is easy: it's really the easiest thing in the world. But photography that rounds a picture off, top to bottom, and holds the content together is really the most beautiful. That means it can be visually very beautiful; it can also be pedestrian in certain ways because that is more appropriate to the story. You try not to put the photography in front of the story—you try and make it part of the story.
Gordon Willis (Annie Hall, The Godfather, The Money Pit)
278..cinematography:.theory.and.practice.
Figure 13.25. (above) This dark, moody scene is lit primarily from the windows with four Maxi Brutes through Hilight diffusion, supplemented by the library lamps on the table. A very slight smoke effect is added as mild diffusion. In cases like this, it is important to control the smoke level so that it doesn't become noticeable in the shot.
Figure 13.26. (left, above) This scene from Minority Report is lit entirely through the windows, as Janusz Kaminski frequently does. Smoke makes the shafts of light visible; if you don't want to see them, just avoid smoke on the set.
Figure 13.27. (left, below) The two shot from the above scene. There is one addition to the lighting—a kicker catches the right side of Max von Sydow's face.
Figure 13.46. Cinematographer Robert Richardson is known for his use of table bounce—aiming a strong contained light straight down at the table and letting the soft bounce be the only source on the actors. There is no explanation for why a light would be doing this in a small house, but nobody cares.
Figure 13.47. DP Gordon Willis is famous for using soft overhead light and nothing else in scenes on The Godfather. He deliberately did not add fill or an eyelight in order to maintain a sense of mystery—nobody knows what Vito Corleone is actually thinking.

AVAILABLE LIGHT WINDOWS
Window light can be some of the most beautiful light of all, if you use it right. Windows can also be disastrous—for example, if you place an actor against a window and the camera is placed so that the actor is in front with the view out the window behind them. In normal daylight conditions, this will produce a very high-contrast situation, and you have to choose between having the outside view properly exposed and the actor in complete silhouette, or exposing for the actor and having the window view overexposed and blown out.

There are workarounds, of course. One is to use a big light on the actor to bring up their exposure. This generally requires a very large unit, and it can be difficult to make it look natural; also, the light has to be daylight balance. The other alternative is to bring down the exposure of the window. The easiest way to do this is to gel the window with ND (neutral density gel), which can reduce the light coming through the window by one, two, three, or even four stops (the ND designations for this are ND .3, .6, .9, and 1.2). If you want to use tungsten balance lights inside the room, you can also add 85 gel, which converts daylight (5600K) to tungsten balance
(3200K). Or you can use gel that combines them: 85N3, 85N6, etc. But there is another alternative: change the shot. This not only makes lighting easier, but it will generally produce a better-looking shot. If the director is flexible about the staging, you'll get a better image by placing the camera so that the actor is beside the window and the background of the shot is no longer the view out the window, but rather something inside the room.

What is it that makes window light so beautiful? We have to distinguish between window light, sky light, and sunlight. Many people think of window light as being "sunlight." Direct sun is hard and contrasty. Sky light is the light coming from the sky itself, which is a huge radiating source and thus very soft. Also coming through a window might be sun bounced off neighboring buildings, the ground, etc. All of this adds up to make window light extremely soft, consistent, and "smooth."

Figure 13.48. (top) A dialog scene with flat front lighting—no depth, no contrast, no "shape"; in other words, boring, ineffective lighting. It also doesn't look very natural.
Figure 13.49. (middle) Back cross keys give the scene shape, depth, contrast, and a fuller range of tones. It helps create a foreground, midground, and background to create that sense of depth—this is also helped by the practical lamp as a compositional element.
Figure 13.50. (bottom) A diagram of back cross keys (labeled upstage/downstage in the diagram). Each light efficiently serves as both a key and a backlight. A typical setup of this method might also include a fill light, backlights, and other units. Importantly, both of these lights are on the upstage side—on the side farther away from the camera. Flat front lighting results from lights being too much on the downstage side, near the camera. This may not apply for a key light on someone facing toward the camera, but the lighting will still be flat if the light is very close to the camera.

I had a philosophy, which I used more in Godfather II than in the first one, that I didn't care whether I saw their eyes or not. It seemed more appropriate not to see their eyes because of what was going on in their heads at certain moments. I had a lot of trouble with that from traditionalists. I said: "That's the way it is because I think it is appropriate at this moment. In another scene, you will see their eyes because it is appropriate to that moment." Hollywood is full of rhetoric—they never really see what they are looking at.
Gordon Willis (The Godfather I, II, and III, Annie Hall)
Figure 13.51. (above) The final medium shot in this scene is lit by a hard light coming from the right. However, it doesn't light the actor directly (except from mid-chest down). It is the bounce off the map on the desk that lights his face in an atmospheric, mysterious, and expressive way. (Photo courtesy Noah Nicolas Matthews)
Figure 13.52. (right, top) This scene by Noah Nicolas Matthews uses Mole Beam Projectors to create sharp, focused, specular hard light to punch through the windows of the set. Note also the smoke machine—smoke is necessary if you want to see the beams of light. (Photo courtesy Noah Nicolas Matthews)
Figure 13.53. (right, bottom) Inside the set, we see the results of those beam projectors and the smoke. At the far end of the set is the actor in front of a small green screen. (Photo courtesy Noah Nicolas Matthews)

MOTIVATED LIGHT
Light in a scene may come from many sources, including lights that are actually in the frame such as practicals, windows, skylights, signs, and so on. In some cases, these sources are visible but do not provide enough output for proper exposure. In this case, the sources may only serve to motivate additional lighting that is off-screen. Some cinematographers and directors prefer that most or all lighting in a scene be motivated in this way—that the viewer should be able to intuitively understand where the light is coming from.

CARRYING A LAMP
Often we want the lamp to appear to be lighting the subject, but for some reason, it just won't do it. If we turn the lamp's brightness up enough to light the actor, then the shade will be completely blown out; or it might be that the actor just isn't close enough to be properly lit by the lamps. In this case, we use a technique called carrying the lamp. To do this, we set a small lamp in a place where it will hit the actor from the same direction as the light from the lamp. It also needs to be the same quality of hard or soft and the same color; table lamps tend to be on the warm side, often about 2800K or warmer. Figures 13.38 and 13.39 show a modern take on a film noir look that employs a different method of carrying a lamp. Here the lighting is actually very simple: it's a Tweenie (650-watt Fresnel) bouncing off the piece of paper in the typewriter. A Betweenie (300-watt Fresnel) gives the actor a backlight, and a second one puts a small splash on the map behind him.
Figure 13.54. (top) Direct sunlight is harsh, contrasty, and unflattering. If you do have to shoot in direct sun, try not to do so during the middle part of the day. Sunlight is softer and at a lower, more flattering angle early in the day or late in the afternoon.
Figure 13.55. (bottom) Here we just have the actor back up a few feet so he is under the awning of the building. This is open shade; it is softer and less contrasty. Notice how it also creates a better balance between the actor and the building in the background. As a bonus, the actor isn't tempted to squint.

DAY EXTERIORS
Working with daylight can be a lot trickier than people think. Some producers and directors believe that working with available daylight will always be faster. Sometimes, but not usually. If it's an overcast day (soft light), then nothing could be simpler. If you are dealing with direct sun, controlling it can require constant attention and adjustment. When dealing with actors in direct sun, you have several choices: diffusing the harsh sun, filling and balancing the shadows, finding a better location or angle for the shots, or moving the shot into open shade. See video examples on the website.
FILL
You can use bounce boards or lights to fill in the shadows and reduce the contrast. Grip reflector boards (Figures 12.28 and 12.30 in Lighting Sources) have a hard side and a soft side, and yokes with brakes so they can be set and will stay as positioned. The sun moves quickly, however, and it is almost always necessary to shake them up before each take. For this reason, a grip has to be stationed beside each board to re-aim it for each take. It is also important to table them if there is a break in filming. This means adjusting the reflector to a horizontal position so it doesn't catch the wind and blow over. Be sure to secure them heavily with sandbags. Between scenes, they should be laid on the ground on their sides so as not to damage the surfaces. Even the soft side of a reflector board can be a bit harsh; one good strategy is to aim the hard side through medium diffusion (like 216) or the soft side through light diffusion (such as Opal), which just smooths it out a bit.

The hardest photography in the world to do is day exteriors—you have so much less control. How to impose myself on daylight rather than it impose itself on me? In a night scene, you do it all. With daylight, it's mostly a matter of angles and choosing good locations in terms of light and shadow.
Michael Chapman (Taxi Driver, Raging Bull)
Figure 13.56. An example of garage door light. The subject is just inside the door, which puts her in open shade. The sun is still hitting the ground, buildings, and the rest of the background behind the camera, which results in a soft bounce on the actress.

SILKS AND DIFFUSION
Another choice is to make the sunlight softer and less contrasty. For tight shots, a 4×4 frame with diffusion can soften the light and can be held by a grip stand, with plenty of sandbags, of course. For larger shots, frames with silk or diffusion are made in many sizes: 6'×6', 8'×8', 12'×12', 20'×20', and even 20'×40'. These larger sizes require solid rigging and should only be used if you have an adequate grip crew who know what they are doing: a 12'×12' silk has enough area to drive a sailboat at 10 knots, meaning it can really do some damage if it gets away from you in the wind (see Figures 12.30 and 12.31).

Silking a scene can have its limitations, as it can constrict the angles and camera movement in a shot. One option is to silk only the actors in the scene, possibly by "Hollywooding" (handholding) a silk. The disadvantage is that it may cause an imbalance in the exposure of foreground and background. We'll talk more about this in Controlling Light and Gripology.
OPEN SHADE AND GARAGE DOOR LIGHT
The simplest and often most beautiful solution to working with harsh direct sun (Figure 13.54) is simply to get out of the sun entirely. If the director is flexible about the scene, it is usually not only faster but also better lighting to move the scene to a shady spot; best of all is open shade, which is the shady side of a building, trees, and so on, but open to the sky (Figure 13.55). Here the subject is lit by the soft light of the radiating sky dome, reflection off the rest of the terrain, and so on. The only danger here is your background: since the exposure will be a couple of stops down, it is critical that you not frame the shot so that the hot background behind the actor is in direct sun and thus severely overexposed. A variation on this is garage door light (Figure 13.56). It can be both beautiful and dramatic. It doesn't have to be an actual garage door, of course; the key is that it is open shade with a darker background, such as you would have with an actor positioned right in a large open entrance like a garage door. Also, a good deal of the light on the actor is being bounced off the surrounding landscape and the ground in front of them, which gives them a nice underlit fill.

I think lighting is the primary metaphor that works in film. We light the set in a way that supposedly is a correlative for a state of a character, or the nature of the scene, or the mood. We are recreating a realistic space and trying to make something optically real even if it's not naturalistic. We're trying to create the illusion that time is advancing: it's early morning, late afternoon, midnight. This infuses the story with some kind of meaning.
Robert Elswit (Nightcrawler, There Will Be Blood)
Figure 13.57. Golden hour (sunset) lighting on Cool Hand Luke, photographed by the great Conrad Hall.

SUN AS BACKLIGHT
If all other options are unavailable, an alternative is to turn the shot so that the actor has their back to the sun. This way, the actor is lit by the bounce off the surroundings. In most cases this is not quite enough, but the addition of a simple bounce board (foamcore, beadboard, or a folding silk reflector) helps. This involves working with the director to adjust the shot. Remember that shots rarely exist on their own; they are usually part of an entire scene. This means that thinking it through and planning for the sun angles must be done before starting to shoot the scene. Once you have shot the master for a scene, it is often not possible to cheat the actors around to take advantage of the sun's position, although for some close-ups it may be possible. It is also important to think ahead about the sun's movement, especially if the scene is going to take a long time to shoot or there is a lunch break during filming.

The first time you work with the sun as a principal part of your lighting scheme, you may well be surprised at how fast it moves through the sky and thus changes the lighting on your actors or the scene overall. This calls for careful planning and scheduling, and for referring to one of the many computer or smartphone apps that can not only help you predict the location and angle of the sun in advance but will also give you accurate data on the exact time of sunset or sunrise.
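The apps mentioned above do this precisely; purely as an illustration of the geometry involved (not something from this book), the standard textbook approximation for solar elevation can be sketched in a few lines. The function below ignores longitude and equation-of-time corrections, so treat it as a rough planning aid only.

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    Uses the common declination approximation; assumes 'solar_hour'
    is local solar time (sun due south at 12.0). Good enough to see
    how fast the sun climbs and falls, not for precise scheduling.
    """
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = 15.0 * (solar_hour - 12.0)  # sun moves ~15 degrees per hour
    lat, dec, ha = (math.radians(v) for v in (lat_deg, decl, hour_angle))
    sin_elev = (math.sin(lat) * math.sin(dec)
                + math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_elev))

# A 40°N location on the June solstice (~day 172):
print(solar_elevation(40.0, 172, 12.0))  # high sun at noon (~73 degrees)
print(solar_elevation(40.0, 172, 0.0))   # negative: below the horizon
```

Note the 15°-per-hour term: that is why a low sun visibly changes position between setups.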
MAGIC HOUR
There is a special condition of lighting that deserves mention—magic hour is the time immediately after sunset when the light of the sky matches the existing street lights, signs, and windows of the exterior scene (Figure 13.58). It can be extraordinarily beautiful, and film companies frequently plan their most special shots to be done in this time frame. The catch is that "magic hour" is nowhere near an hour; it's more like twenty to thirty minutes, and it can vary according to local conditions (such as mountains) and heavy cloud cover. The Terrence Malick film Days of Heaven is famous for having been shot almost entirely at magic hour. While the cinematography by Nestor Almendros is justifiably regarded as a masterpiece, few feature film productions can afford to limit shooting to only two narrow time slots each day—morning and evening—although they do schedule special shots for that time frame. In planning shots that depend on time of day, the Assistant Director is your key resource—he or she is in charge of the schedule, and they usually understand the concerns of the cinematographer in this regard. Commercials, on the other hand, will often spend most of a day preparing for just that special shot. Golden hour (Figure 13.57) is the last few minutes before the sun goes down.
Figure 13.58. A magic hour shot from La La Land.

Once a magic hour shot has been planned, it's all about preparation. Not only is the time frame extremely short, but the lighting conditions change minute by minute. If you are only doing available light, then you'll need to check your exposure before every shot. If, on the other hand, you are adding some foreground lights on an actor, there's another problem: as the sky darkens and exposure drops, the lights you have added will quickly become too hot—out of balance with the scene. Standard procedure for the gaffer is to have the crew standing by to adjust the intensity of the lights quickly and efficiently. This can be done by adding scrims or nets in front of the light and possibly by dimming.

If the director is trying for more than a couple of takes of one shot, or even to do two or three different shots, then it's going to be a mad scramble and a high-adrenaline situation, but when you pull it off successfully, it's high fives all around. Always be aware that a sudden change in cloud cover can ruin the best-planned shots. If clouds roll in suddenly, the exposure may drop to a level that makes proper shooting impossible. Beware of frantic directors who insist on shooting anyway, sometimes by declaring "I don't care how it looks." They may say so at the time, but when they are unsatisfied with the dailies, it is you they will blame. It is very unlikely that they will remember that it was they who insisted on shooting under bad conditions. Contingency plans and setting up extra early for these types of shots are what prevent disasters and what set real professionals apart from amateurs.
THE SAME SCENE LIT SEVEN DIFFERENT WAYS
These are frames from a video you will find on the website—a scene lit with seven different lighting techniques. This is not a complete list of methods, but it represents most of the main ways you can approach the lighting of a scene. This was a classroom exercise; all the variations were lit and shot within a four-hour period (Figures 13.59 through 13.66).
Figure 13.59. (left, top) The classroom exercise lit with classic hard light/back cross keys. Note that the 2K Baby Junior outside the window was actually a little farther away than is shown in the diagram. The lace in front of her key light projects a very subtle texture on her face. Putting things in front of a light like this is quite common; for example, DP Wally Pfister (The Dark Knight, Batman Begins, The Prestige, Inception) says that he nearly always puts something in front of the key for texture.
Figure 13.60. A typical soft light setup is essentially the same as the hard light scheme, except that in this case the lights are both bounced off foamcore and have 4×4 diffusion frames. The 9-Lite FAY outside the window would actually be farther away than is shown in the diagram. As you see the setup in the top photo, this would work only for the coverage; you would need to pull the units back for a wide shot. It is not unusual for the lighting of the wide shot to be slightly different from the close-ups, as long as it has the same general feel.
Figure 13.61. The only light on the actors is what is bouncing off the table. A 9-light FAY outside the window is aimed down at the desk. For the close-ups, a bounce card was placed on the desk. A 24×36 flag kept any direct light off the actors. The Venetian blinds in the scene are very helpful; they can be mostly closed for the wide shot, then opened up for the close-ups.
Figure 13.62. "Godfather lighting" is based on the idea first used by Gordon Willis in the film The Godfather. Whether or not you want to add an eyelight is an artistic choice for the DP and director. The overhead soft was hand built by the students: a simple 1×6 wooden frame with 12 porcelain sockets for ECA (250-watt) bulbs, covered with 216 diffusion.
Figure 13.63. Practicals (actual working lamps) offer many advantages—they are very natural and realistic, and you can do a lot with actor movement. For an example of this, see the video "Seven Ways to Light a Scene" on the website that accompanies this book. If the scene consists of more than just the actors sitting at the desk (for example, an entrance or exit), they might need to be supplemented with other lights or additional practicals. Don't imagine that any of these lighting plans need to be "pure." Combining methods and supplementing is always an option.
Figure 13.64. The setup mimics low practicals or perhaps moonlight bouncing off the floor. Moonlight would never actually generate this much light, but this might be a situation where you're not so concerned about "motivating the light." For the classroom exercise, this is the lighting we ended up shooting, as it fit the mood of the scene.
Figure 13.65. (left) This might be appropriate if the script calls for something like an electrical blackout or a zombie apocalypse. For the male actor, we have added a homemade box light run through a flicker box to simulate the effect of candles. With a flicker box, be careful not to overdo it—in reality, candles don't flicker unless there is air movement.
Figure 13.66. (above) A typical flicker box.
Figure 13.66. In this plan, two 1K Baby Fresnels are aimed straight down on the actors. In reality, you would rarely use a scheme like this on a dialog scene. For the male actor in particular, the eyelight is important.
14
controlling light
Figure 14.1. A soft egg-crate controls the light on an antique car. (Photo courtesy DOPChoice)

HARD LIGHT AND SOFT LIGHT
The most fundamental and consistently used way in which we control our lighting is by making it varying degrees of hard or soft. As we discussed in the last chapter, hard light is the natural output of any small source, such as a Fresnel light. As the source gets bigger, the light gets softer. For this reason, an LED light panel is softer, but not a great deal softer. Since creating light sources that are many feet across is impractical in most cases, the most frequently used method is to bounce the light off a larger (usually white) surface; sometimes a white ceiling or wall can make a convenient bounce.
The other method is to aim the light through diffusion; we'll look at several examples of this. Some people think that just using heavier diffusion makes a light softer. Not strictly accurate—heavier diffusion just allows less of the direct (hard) light through. The most fundamental rule when using diffusion is that unless you have the light fill up the whole surface, you are not getting the full effect of softness. You can always make a hard light soft, but you can't make a soft light hard—not without a great deal of difficulty. Also, if you take a soft light farther away, it gets harder, because the size of the source relative to the subject gets smaller.
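The "relative size" idea above can be put in rough numbers: what matters for softness is the angle the source subtends from the subject's position. This little sketch (my own illustration, not from the book) shows how backing a diffusion frame away shrinks its apparent size and hardens the light.

```python
import math

def subtended_angle_deg(source_width_m, distance_m):
    """Apparent (angular) size of a light source as seen from the subject.

    Softness tracks this angle: the bigger the source appears,
    the more directions light wraps in from, and the softer the shadows.
    """
    return math.degrees(2 * math.atan(source_width_m / (2 * distance_m)))

# A 1.2 m (roughly 4') diffusion frame, close versus far:
print(subtended_angle_deg(1.2, 1.0))  # ~62 degrees: very soft
print(subtended_angle_deg(1.2, 6.0))  # ~11 degrees: noticeably harder
```

The same math explains why the sun, though enormous, is a hard source: at its distance it subtends only about half a degree.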
FILL FOR DAY EXTERIORS
You can use bounce boards or lights to fill in the shadows and reduce the contrast. Grip reflector boards (Figures 14.7 and 14.8) have a hard side and a soft side, and yokes with brakes so they can be set and will stay as positioned. The sun moves quickly, however, and it is almost always necessary to shake them up before each take. For this reason, a grip has to be stationed beside each board to re-aim it for each take. It is also important to table them if there is a break in filming. This means adjusting the reflector to a horizontal position so it doesn't catch the wind and blow over. Between scenes, they should be laid on the ground on their sides so as not to damage the surfaces. Even the soft side of a reflector board can be a bit harsh; one good strategy is to aim the hard side through medium diffusion (like 216) or the soft side through light diffusion (such as Opal), which smooths it out a bit. Foamcore and beadboard are also used. Foamcore, having a smooth surface, is very slightly harder than beadboard, which has a pebbly surface.
SILKS AND DIFFUSION
Another choice is to make the sunlight softer and less contrasty. For tight shots, a 4×4 frame with diffusion can soften the light and can be held by a grip stand, with plenty of sandbags, of course. For larger shots, frames with silk or diffusion are made in many sizes: 6'×6', 8'×8', 12'×12', 20'×20', and even 20'×40'. These larger sizes require solid rigging and should only be used if you have an adequate grip crew who know what they are doing: a 12'×12' silk has enough area to drive a sailboat at 10 knots, meaning it can really do some damage if it gets away from you in the wind (Figures 14.7 and 14.8).

Silking a scene can have its limitations, as it can constrict the angles and camera movement in a shot. One option is to silk only the actors in the scene, possibly by "Hollywooding" (hand-holding) a silk. The disadvantage is that it may cause an imbalance in the exposure of foreground and background.
SCRIMS AND BARNDOORS
All Fresnels and most open-face lights have a few common elements used to control them. Barndoors are one of these; they come in both two-leaf and four-leaf versions. In most cases they fit into ears that are part of the front of the light (Figure 14.3). Scrims are another form of control: they reduce the total amount of light without altering the quality of light (Figure 14.2). Many people think they are diffusion; they aren't, though in some cases you can faintly see the pattern of the metal screen. Scrims are color coded: red for a double (reduces the amount of light by half, which is a full stop) and green for a single, which reduces the light by one-half stop. A standard scrim set includes a double, a single, a half-double, and a half-single. A "Hollywood" scrim set has two doubles.

Scrims should be kept in the scrim bag or box when not in use, not dropped on the floor. Scrims and barndoors should always be with the light. Bringing a light to the set without scrims and barndoors is like wearing a t-shirt that says "I'm an amateur."

Figure 14.2. (left) Set of scrims and their scrim bag. Shown here is a "Hollywood" scrim set, which includes two doubles in addition to a single, a half-double, and a half-single.
Figure 14.3. (above, top) A typical set of four-leaf barndoors.
Figure 14.4. (above, middle) A single net, a double, a silk, and a solid flag. These are 18"×24".
Figure 14.5. (above, bottom) Various sizes and shapes of flags and cutters.
FLAGS, SOLIDS, AND NETS
Flags are fire-resistant duvetyne—black cloth on metal frames—used to block off unwanted light, whether that means cutting spill from a light, blocking a window, or shading an area so a monitor can be more easily viewed. Nets are the same type of frames but covered with a textile scrim material, which is a netting with very small openings. A single net has one layer of this material and reduces the amount of light by 1/2 stop; a double net has two layers of the same material and reduces the light by one full stop. A lavender is much lighter and only very slightly reduces the light. Both nets and flags are available in sizes from very small ones a few inches across, to 4×4s mounted on solid frames, to 6×6 up through 20'×20' and even 30'×30' sizes mounted on breakdown frames on the set. The larger ones are called overheads. Nets up to 24"×36" might be fully enclosed by the metal frame or may have one side open, so that there is no shadow cast onto the scene by that edge (Figure 14.4).
controlling.light..301.
Figure 14.6. HMIs bounced into large muslins create a soft light for this scene. (Photo courtesy Westcott, Inc.)

CHIMERAS AND SNOOTS
A snoot is anything that goes on the front of a light to contain the spill of the beams. Depending on how long and narrow it is, it can be more or less confining. A long and narrow snoot will shape the light to a fairly small area. When it is on the front of a 2K soft, the light it produces will be soft but selective—a popular type of lighting.
SOFTBOXES
Shooting a light through a large diffusion frame is not difficult; however, it often requires flags on both sides and perhaps above and below. Softboxes (Figures 14.1 and 14.7), such as the ones made by Chimera and Photoflex, are self-flagging. Already enclosed, they can convert almost any type of light into an instant soft source. Many of them also have double panels: a diffusion on the face of the light but also a second diffusion inside—in some cases, these can be changed to heavier or lighter diffusion. Always be sure to open the ventilation panels, especially the ones on the bottom and top. Also, be careful that you are using a softbox designed for hot lights (tungsten or HMI)—some softboxes are only for still-photo strobes, which don't get as hot.

I like to say that lighting is about taking the light away. I often like to use the shadows more than the light.
Vilmos Zsigmond (The Deer Hunter, Deliverance, McCabe and Mrs. Miller)
EGGCRATES
Eggcrates are much as the name implies: divided into small openings to control the spill of the light (Figures 14.1, 14.12, and 14.3). While an eggcrate doesn't confine the light as much as a snoot does, the advantage is that it works on a soft light. There are both hard and soft eggcrates. The collapsible soft variety is made in a wide range of sizes, up to very large ones.
Figure 14.7. (top) An HMI PAR through a Chimera, a beadboard bounce from underneath, an LED for fill, and a 4×4 silk to control the sun as backlight—especially important for the white uniform. (Photo courtesy Calabria Lighting and Grip)
Figure 14.8. (above) This day exterior shot of a vehicle uses two 12×12 bounces, a small bounce board, and a 12K HMI for a hard backlight. (Photo courtesy DP Graham Futeras)

THE WINDOW PROBLEM
Windows (and glass doors) present a dilemma—the exterior is almost always much hotter than the interior. With an actor posed against the window, there is a problem to solve. You can try to pump up the interior lighting to match the outdoors, but this is often not practical.

Figures 14.14 and 14.15 show one solution to the problem: convince the director to change the shot so that the actor is illuminated by the window light, not fighting it.
ND FOR WINDOWS
Another solution is to reduce the light coming through the window. There are several ways to do this—sometimes double and triple nets outside can help, but more often neutral density gel is used (Figure 14.21). ND comes in .3, .6, .9, and 1.2 densities—one, two, three, and four stops of reduction. It also comes in combination with CTO and half-CTO so that it is not necessary to use two separate gels to get both exposure and color balance. Standard practice is to tape or staple the gels to the outside of the window or to spray a mist of water on the window and squeegee the gel on. It needs to be tight so it doesn't flap in the wind and cause audio problems. It is a good idea to build up layers of ND. For example, if you will need ND .9 at the brightest part of the day, use a layer of .6 and a layer of .3—this way you can easily peel off a layer if there are clouds or the sun is setting.

Figure 14.9. Large eggcrates cover the triangular soft light above the table in this setup for The Waterhorse, lit by cinematographer Oliver Stapleton. Also rigged on the grid are Tweenies with the barndoors squeezed down to give the faces more directional light (Photo courtesy Oliver Stapleton, BSC)
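The layering trick works because ND density is logarithmic: each .3 of density is one stop, and stacked gels simply add their densities. A quick sketch of that arithmetic (the function names are ours, purely illustrative):

```python
import math

def nd_stops(density: float) -> float:
    """Stops of light loss for an ND gel of the given density.

    ND transmission is 10 ** -density, and one stop is a factor of 2,
    so stops = density / log10(2) -- about density / 0.3.
    """
    return density / math.log10(2)

def stack(*densities: float) -> float:
    """Layered ND gels add their densities."""
    return sum(densities)

# A .6 plus a .3 behaves like a .9 -- about three stops:
print(round(nd_stops(stack(0.6, 0.3)), 1))  # 3.0
```

This is why peeling one layer off the .6-plus-.3 stack cleanly gives back the stop values listed above.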
COOKIES, CELOS, AND GOBOS
Cookie is short for cuculoris, any device that is set in front of a light to create a shadow pattern (Figure 14.17). In most commercially available cookies, the pattern is soft-edged and random, but many times a specific shape is needed (a church window, for example) and it is possible to make one on the set with foamcore, which is easily cut.
A soft cookie is partially transparent, so the shadows it casts are very subtle. It is called a celo, as it is made from celo flex material, which is window screen with a plastic coating that is burned in places to create the more transparent areas (Figure 14.18).
A gobo is technically anything that goes between a light and the subject, but cookies are rarely called by this name. Gobo usually refers to a hard metal matt that goes inside a leko or other focusing light to create a well-defined and specific pattern (Figure 14.19). Gobos are available in a wide range of patterns: windows, trees, the moon, stars, leaves, and so on. They require a gobo holder and a leko that has a slot for it; you need to specify this in your lighting order.
DIMMERS
Dimmers come in two basic forms: individual dimmers and systems controlled by a central dimmer control board. Older dimmers either used variable resistance or were variable autotransformers (known generically by the brand name Variac). An ordinary transformer consisting of one coil which is shared by the primary and the secondary side of the circuit is known as an autotransformer—it converts voltage up or down, such as 120 volts to 230 volts.
In a variable autotransformer the ratio of the primary to the secondary windings is variable, which means that the ratio of the primary voltage to the secondary voltage is variable. Electronic dimmers perform the same function but are far lighter and can be controlled remotely. Individual dimmer packs can be used to control a single light or a group of lights on the same circuit, but more frequently, dimmer packs are grouped together in a single box and controlled by a DMX dimmer board. It is a common practice to run an entire show on dimmers, which is far quicker and more efficient than having electricians run around dropping scrims into lights, especially when they are far away or up in the air. Unfortunately, some types of lights (such as HMIs) cannot be dimmed in this way, although modern units frequently have their own built-in dimmers.

Figure 14.10. (top) The beginnings of a large setup—PARcans are directed into angled reflectors to provide a soft overhead ambient (Photo courtesy Oliver Stapleton, BSC)
Figure 14.11. (above) A balloon light and two large silks provide soft ambient with no hard reflections, which is important for car shots (Photo courtesy Sourcemaker)
Figure 14.12. (top) A soft eggcrate Snapgrid by DoPChoice controls spill from the soft source. In addition, Kino Flos on a pipe secured with wall spreaders (Photo courtesy DoPChoice)
Figure 14.13. (above) A Chimera softbox with an eggcrate, modified by a single net (Photo courtesy Calabria Lighting and Grip)
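The variable autotransformer relationship described above—output voltage scales with the winding ratio—amounts to a one-line calculation. A sketch (the function name and the 0.6 tap setting are illustrative, not from the text):

```python
def secondary_voltage(v_primary: float, winding_ratio: float) -> float:
    """Output of a variable autotransformer: the secondary voltage is
    the primary voltage scaled by the secondary-to-primary winding ratio."""
    return v_primary * winding_ratio

# Dialing a 120 V supply down to a 0.6 winding ratio gives 72 V:
print(secondary_voltage(120, 0.6))  # 72.0
```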
Figure 14.14. (left, top) The window problem—if you expose for the background, the subject is badly underexposed
Figure 14.15. (left, middle) If you expose for the subject, the background is blown out so badly that it flares the lens
Figure 14.16. (left, bottom) The simplest solution is to reposition the subject and change the camera angle
Figure 14.17. (above, top) A cuculoris (cookie) for breaking up the light and casting a random shadow
Figure 14.18. (above, middle) A celo (soft cookie) for adding a more subtle texture to the light
Figure 14.19. (above, bottom) A gobo for casting sharp-edged shadows when used in a leko
Figure 14.20. (above) A 4x4 cookie creates a light pattern through the smoke
Figure 14.21. (right) A grip applying Rosco ND9 to a window
LED DIMMERS
LED lights cannot usually be controlled by ordinary AC dimmers, but special dimmers are made for them (Figure 14.24). Nearly all professional LED lights have built-in dimmers, most of which can be controlled remotely by DMX. Most LED motion picture lighting units also have controls for color, from the simplest bi-color controls, which can change the light from daylight to tungsten balance and degrees in between. The high-end units have an extraordinary degree of color control, including replicating the color of a wide variety of color gels.

Figure 14.22. Running large sets with dozens or even hundreds of dimmer circuits requires some method to keep track of what is what. Gaffer Tony Nako made this diagram for a scene in X-Men. It enables him to quickly communicate to the dimmer board operator what adjustments he needs to make (Diagram courtesy of Tony "Nako" Nakonechnyj)
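Remote control by DMX comes down to sending each fixture an 8-bit level, 0 to 255, per channel—that much is part of the DMX512 standard. A minimal mapping from a 0-100% fader position, with an illustrative function name:

```python
def dmx_value(percent: float) -> int:
    """Map a 0-100% dimmer level to an 8-bit DMX512 channel value (0-255)."""
    if not 0.0 <= percent <= 100.0:
        raise ValueError("dimmer level must be between 0 and 100")
    return round(percent / 100.0 * 255)

print(dmx_value(100))  # 255 -- full
print(dmx_value(50))   # 128 -- half fader
```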
HAND SQUEEZERS
A particularly useful type of dimmer is called a hand squeezer (Figure 14.23). It is an ordinary household lighting dimmer, usually 600 watts or 1000 watts, often homemade with a male Edison plug on the input side and a female Edison on the output side. It is used for controlling practical lights on the set and for occasional other uses. The ordinary 600 watt wall dimmers you can buy at the hardware store do work, but don't expect them to last very long, especially if you plug something in that draws more than 600 watts. If you want to build something that lasts, go for a 1000 watt dimmer. 2000 watt dimmers are available, but they are not cheap.

Figure 14.23. (above, top) A commercially made 2K hand squeezer (top) and a homemade 1K hand squeezer (bottom)
Figure 14.24. (above) An LED dimmer and handmade LED source used to produce a light in the hands of the actor
Figure 14.25. (left, top) The gaffer using a hand squeezer to adjust the brightness of a practical lamp
Figure 14.26. (right, middle) If the lamp is turned up bright enough to get some exposure on the actress, it burns out annoyingly on screen
Figure 14.27. (left, bottom) With a Tweenie carrying the lamp, the practical is dimmed down to a more appropriate level. She is no longer being lit by the practical lamp, but it looks like she is
Figure 14.28. (opposite page, top) A fire effects rig—three lights of different sizes through half-CTO for warming, all controlled by a three-circuit flicker box
Figure 14.29. (opposite page, bottom) A fire effects rig on X-Men by Chief Lighting Technician Tony Nako using Muzzballs (Photo courtesy of Tony "Nako" Nakonechnyj)
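The 600-watt and 1000-watt ratings above are hard limits, and the warning about hardware-store dimmers burning out suggests leaving headroom. A simple check of that habit (the function name and the 80% headroom figure are our illustrative choices, not from the text):

```python
def safe_on_dimmer(load_watts: float, dimmer_watts: float, headroom: float = 0.8) -> bool:
    """True if the load sits comfortably under the dimmer's rating.

    Running a dimmer at a fraction of its rated wattage (80% here,
    an illustrative figure) helps it survive long days on set.
    """
    return load_watts <= dimmer_watts * headroom

print(safe_on_dimmer(450, 600))  # True -- a 450 W practical on a 600 W dimmer
print(safe_on_dimmer(750, 600))  # False -- time for the 1000 W unit
```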
Figure 14.30. (opposite page, top) No diffusion (Photo courtesy Lee Filters)
Figure 14.31. (opposite page, middle) Hampshire frost is an extremely light diffusion (Photo courtesy Lee Filters)
Figure 14.32. (opposite page, bottom) Hollywood Frost is an extremely light diffusion (Photo courtesy Lee Filters)
Figures 14.33 through 14.36. (this page) The basic white diffusions by Lee Filters. Since all diffusion gels reduce the amount of light to a greater or lesser extent, all of these photos will appear to be slightly underexposed. The specific amount of light loss is shown in the lower right hand corner (Photos courtesy Lee Filters)
Figures 14.37 through 14.40. The Hampshire Frost series of Lee Filters (Photos courtesy Lee Filters)
Figures 14.41 through 14.44. Examples of the Opal Frost and Grid Cloth series of Lee Filters (Photos courtesy Lee Filters)
Figures 14.45 through 14.48. Some samples of various diffusions from Lee Filters (Photos courtesy Lee Filters)
Figures 14.49 through 14.52. Some samples of various diffusions from Lee Filters (Photos courtesy Lee Filters)
Figures 14.53 through 14.56. Frost and Quiet Frost diffusions. The quiet diffusions are far less likely to create noise if there is wind on the set—something the sound department will appreciate (Photos courtesy Lee Filters)
15
gripology
Figure 15.1. This standup interview shows that lighting a day exterior calls for a good deal of work by the grips (Photo courtesy Nicholas Calabria)

Grip work is an important part of the filmmaking process. An understanding of basic grip procedure and equipment is essential for any lighting professional. This chapter will cover the tools and equipment of grip work, standard methods, and general set operating methods.
What is a grip? Grips do many jobs on a film set: they set up and push the dolly, they handle cranes and all "rigging," which would include hanging lights from a wall, over the side of a cliff, on the front of a roller coaster, or anything that goes beyond simply placing a light on a stand or on the floor. They also handle any lighting control devices (such as nets, flags, and silks) that are not attached to the light. Anything that is attached to the light, such as diffusion clipped on with C-47s (also called bullets), is done by electricians. This is the American system; in most other countries, all lighting control is done by the electricians—meaning that they need grip skills and equipment. On the other hand, grips "don't touch anything with copper in it," meaning electrical cable, lights, dimmers, and so on.
DEFINITIONS
Baby means a 5/8-inch stud on a light, or a 5/8-inch pin or female receiver on a piece of grip equipment. Junior and Senior refer to a 1 1/8-inch stud or female receiver. Other standard sizes are 1/4, 3/8, and 1/2-inch. These plus the 5/8-inch are all found on most standard grip heads.
LIGHT CONTROLS
Grip effects units modify the character or quantity of light—blocking, redirecting, feathering, and wrapping the light—in a word, control. The basic rule of thumb is that when the control is attached to a light, it is an electric department deal; if it is separate from the light (on a grip stand, for example), it becomes a grip thing and, for the most part, different equipment is used to deal with it.
Figure 15.2. Mole 12-light Maxi-Brutes and some single MolePARs being readied for a night shoot. The Maxis are on candle sticks, which are usually rigged by the grip team—they have the equipment for the job. For rigs like this you need to think about the cable runs and how they will be affected by raising and lowering the crane—the electricians (lighting technicians) handle this (Photo courtesy Greg Bishop)
REFLECTORS
Reflector boards or shiny boards are ancient and venerable film equipment. On "B westerns" of the 1940s and 1950s they were practically the only piece of lighting equipment used, those being the times of day-for-night, a now largely abandoned technique. Traditionally, reflector boards are roughly 42×42-inch and are two-sided: a hard side and a soft side.
The hard side is a mirror-like, smooth-finish, highly reflective material. The soft side is reflective material with just a bit of texture. This makes it a bit less specular and slightly softer. As a result the soft side has considerably less punch and throw than the hard side, which is sometimes referred to as a bullet for its ability to throw a hard, specular beam of reflected sunlight amazing distances.
The soft side is important for balancing daylight exteriors. Because the sun is so specular, daylight exteriors are extremely high-contrast. The problem is that the sun is so intense, most film lights are insignificant in relation to it. Only an arc, 6K, or 12K means much against direct sun. The answer is to use the sun to balance itself. This is what makes reflector boards so useful: the intensity of the reflection rises and falls in direct proportion to the key source.
A variety of surfaces are available for reflector boards. The hardest is actual glass mirror, which provides tremendous punch and control, but of course is delicate to handle. Easier to transport but less regular is artificial mirror, which can be silvered Plexiglas or silvered mylar (which can be stretched over a frame).
Hard reflector is very smooth and specular and has very high reflectivity. The soft side is usually silver leaf, which is not as smooth and has leafy edges that hang loose. Boards are also available in gold for matching warm light situations such as sunsets. Beadboard is just large sheets of styrofoam. Beadboard is a soft reflector because of the uneven surface, but it is not rigid and seldom stays flat for very long. This is not a problem for the soft side, but it makes it very difficult to focus and maintain an even coverage on the hard side. Beadboard is a valuable bounce material, but keep in mind that a single sheet of beadboard is the equivalent of thousands of styrofoam cups. Rosco and Lee also make supersoft and other reflector materials available in rolls. They can be stretched on frames and made into 6×6s, 12×12s, etc. A good trick is to take a cheap glass mirror that is permanently mounted to a backing board (the kind of inexpensive full-length mirror available at a dime store, for example) and smash it with a hammer in several places. With a semi-soft light, it provides a dappled bounce with nice breakup. With a hard light, it makes a sparkly reflection that can simulate the reflection of buildings.

Electricians make the light, grips make the shadows.
Figure 15.3. A grip standing by to shake up (refocus) a reflector. Because the sun moves so rapidly, it is usually necessary to shake them up before almost every take

OPERATING REFLECTORS
The key rule in relation to operating a reflector is of course the old physics rule: angle of incidence = angle of reflection (Figure 15.3). This is the problem with reflectors. Because the sun is constantly moving, the boards require constant adjustment. For fast-moving work, it is necessary to station a grip or electrician at each board (one person can handle a couple of boards if they are together). Before each take, the command shake 'em up means to wiggle the boards to check that they are still focused where they should be. Reflector boards are equipped with a brake in the yoke to keep them properly focused. Other rules for using reflectors are:
• Keep reflectors tabled (parallel to the ground) when not in use. This keeps them from blinding people on the set and makes them less susceptible to being knocked over by the wind. It also serves as a reminder that they need refocusing.
• If it's windy at all, lay the boards on the ground when not being attended by a crew member.
• The reflection of direct sun is very intense. Try to keep it out of people's eyes.
• Boards must be stabilized during the take. A moving reflection is very noticeable and obviously artificial.
Standard procedure for aiming a board is to tilt it down until the reflection of the sun is just in front of you. Once you have found the spot, it is easier to aim it where you want it to go. Specialized stands known as reflector stands are used for shiny boards. Also called combo stands, they feature junior receivers, leg braces, and no wheels. They are called combo stands because they are also used as light stands.

If I were ever stranded on a desert island there would be three things I'd need: food, shelter, and a grip.
George C. Scott
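The angle-of-incidence rule is the standard reflection identity r = d − 2(d·n)n, which is why a board redirects the beam predictably as it tilts. A quick numeric sketch (the code is ours, purely illustrative):

```python
def reflect(d, n):
    """Reflect direction vector d off a surface with unit normal n.

    r = d - 2 (d . n) n, so the outgoing angle to the surface
    equals the incoming angle -- the rule the grips work by.
    """
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

# Sun hitting a tabled board at 45 degrees (normal pointing straight up)
# bounces off at 45 degrees on the other side:
print(reflect((1, -1), (0, 1)))  # (1, 1)
```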
FLAGS AND CUTTERS
Flags and cutters are basic subtractive lighting. Coming in sizes from 12×18-inches to 24×72-inches, they are used to cast shadows, control spill, and shape the light. They are constructed with 3/8-inch wire and terminate in a 3/8-inch pin that fits in grip heads. Flags come in the standard sizes 12×18, 18×24, 24×36, and 48×48-inch (four by four).

Figure 15.4. Grips Hollywood (hand hold) a quarter-stop silk to cover a walk. Since the actors are walking and the dolly moving, they can't set it in highboys. One of the grips rides the dolly for ease of movement. A full silk, which is much heavier diffusion, would have been too much; the actors would have been significantly darker than their background and would have needed additional lighting, which would have been difficult on a walking shot (a walk-and-talk) and even more difficult without shutting down the entire street—which is much more difficult to get a permit for

FLAG TRICKS
The most basic rule about flags is that the farther away the flag is from the light source, the harder the cut will be, that is, the sharper the shadow will be. This is why flags are such an important adjunct to barndoors. Being attached to the light, barndoors can't be moved away from the light and there is a limit to how sharp a cut they can produce. The flag can be moved in as far as the frame will allow, making the cut very adjustable.
For the sharpest cut of all, remove the Fresnel, usually by opening the door of the light. The more flooded the light, the sharper the cut. For razor sharp control of light (and for harder shadows) the light must be at full flood. The more spotted a light is, the softer the shadows will be, and the harder it is to control with barndoors and flags. Cutters are longer and narrower. Sizes include 10×42, 18×48, and 24×72-inch.
Floppy flags have two layers of duvetyne (the black cloth covering material); one of them is attached only on one side and can be "flopped down" to make a 4×4 into a 4×8, for example. Just position the spring clip so that it doesn't cast an unwanted shadow (Figure 15.9). Except in rare circumstances, never mount a flag directly to the grip head on a C-stand. Always include at least a short arm so that the flag can be repositioned. As with any piece of equipment you set, always think ahead to the next step—that may be where you want it now, but what happens if it needs to move? Don't get yourself too locked in. Leave some slack. Common terms are sider, topper, top chop, bottomer, and bottom chop, all self-explanatory.
NETS
Nets are similar to flags, but instead of opaque black duvetyne (which is fire resistant), they are covered with bobbinet, a net material that reduces the amount of light without altering its quality (Figure 15.5).
Nets come in two predominant flavors, single and double (there is a third, called a lavender, rare but it reduces the light 1/4 stop). Double nets are a double layer of bobbinet. Each layer is rotated 90° from the layer underneath it. Nets are color coded in the bindings that secure the scrim to the frame. Singles are white, doubles are red, and silk is color coded gold. Net material comes in black, white, and lavender, but black is predominately used.
A single net reduces light by a 1/2 stop, a double by 1 stop, and a triple, 1-1/2 stops. A lavender reduces by 1/4 stop or less. The values are, of course, approximate—the actual reduction depends on the light source, its distance from the net, and the angle of the net material relative to the direction of the light. The main difference between a flag frame and a net frame is that net frames are open-ended: that is, they have one side open so that it is possible to orient the net so that there is no shadow of the bar where it might show.

Figure 15.5. (top) Set up for a day exterior single (close-up or medium shot) (Photo courtesy Calabria Lighting and Grip)
Figure 15.6. (above) A 4x4 cart holds 4x4 open frames "skinned up" with various gels and diffusion. Extra rolls of diffusion are stored on the side
NET TRICKS
If you tilt a net so that it is at an angle to the light, it becomes "thicker" and cuts the light even more. This is an easy way to fine-tune the attenuation. As with a flag, the farther away the net is from the light source, the harder the cut will be. To set a single source so that the f/stop stays constant over a long distance (to cover a walk, for example), nets can be stacked and overlapped so that the area nearest the light is covered by a single and a double, then a single only, then nothing.
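The stop values combine by simple addition, and each stop halves the light. A sketch of the arithmetic behind stacking a single over a double (the function name is ours; the constants are the approximate stop values given above):

```python
def transmission(stops: float) -> float:
    """Fraction of light that passes a net cutting the given number of stops."""
    return 2.0 ** -stops

SINGLE, DOUBLE, LAVENDER = 0.5, 1.0, 0.25  # approximate stop values

# A single overlapped with a double cuts 1.5 stops,
# passing roughly 35% of the light:
print(round(transmission(SINGLE + DOUBLE), 2))  # 0.35
```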
Figure 15.7. (top) Two highboys and a piece of Speed Rail are used to rig a goal post to support a MolePAR. This type of rigging would be done by the grip crew in collaboration with the electricians
Figure 15.8. (above) A detail of the goal post—pipe clamps hold the Speed Rail (aluminum pipe)
Figure 15.9. (left) 4x4 solids and pieces of duvetyne (fire resistant black cloth) being used to tent a window. This blocks out all extraneous light while leaving room for a light to be aimed through the window (Photo courtesy Casey Schmidt)
To fine-tune very small areas, paper tape can be applied directly to a net. When the gaffer calls for a single or double net on a light, don't just bring the one he asks for, bring them both; it might save a trip. This goes for just about anything that you offer to a light: diffusion, scrims, etc. Very small flags and nets are called dots and fingers (Figure 15.12).
CUCULORIS (COOKIES)
Cuculoris, or cookies as they are commonly called, are random patterns designed to break up the light either subtly (if close to the light) or in a hard shadow pattern (if away from the light). Standard cookies come in two varieties: wood, made from 1/4-inch plywood, and celo, which is a layer of wire net covered with a plastic-like material that has a random pattern burned into it to create a varying pattern of transparency and semi-opacity. Celos are much more subtle than a wood cookie.
Any flat device designed to cast a patterned shadow is a cookie. Leaf patterns, blinds, and stained-glass windows are common cookie patterns. Cookies of this type are also known as gobos. Foamcore is easily cut, but rigid enough to be self-supporting. When actual branches and leaves are used as cookies they are called dingles. A small branch is a 1K dingle and larger ones are 2K or 5K dingles; another name is branch-aloris. If they actually appear in the frame they are called smilex or lettuce.
As with flags, the farther away from the light, the sharper the cut. If a hard, sharp shadow pattern is needed, be sure to employ a light large enough to back way off and still get the stop. For maximum sharpness, take the lens off the light. This reduces the radiator to a point source, which casts the sharpest possible pattern.

Figure 15.10. (above, top) A highboy (hi-hi roller stand) has a 4-1/2" grip head which in this case holds a grip arm. The arm is doubled to get it out further
Figure 15.11. (above, middle) A detail of the doubled grip arm rig
Figure 15.12. (above, bottom) A Matthews dots and fingers kit: very small flags and nets, some of them round (dots)
Figure 15.13. (right) The right hand rule. As you face the stand, the handles are on the right

GRIDS AND EGGCRATES
The use of grids came from still photographers, who used them to make non-directional light directional. Honeycomb grids have very small openings and are one to two inches thick. They function in the same way as an eggcrate on a soft light.
OPEN FRAMES
All size frames also come uncovered so that the grips can put on the diffusion of the day. A couple of open frames in each size are a must in any grip order. Paper tape can be used to apply gel to a frame, and double-stick tape is also useful. Snot tape is the more professional solution: double-sided transfer tape in a dispenser, it is a very quick and efficient way to gel up a frame and the best way to accomplish the job.
DIFFUSERS
Also known as silks, diffusers come in all the standard flag sizes and are covered with a white silk-like diffusion material. Originally, it was real silk, and still is if you order China silk. More often, the silk is made of white nylon, which is easier to clean and doesn't turn yellow as readily. Silks soften light and reduce it by about 2 stops. There is also a much lighter diffusion, the 1/4 stop silk, that reduces the light by about 1/3 stop—nobody seems to know why this is.

Figure 15.14. (above, left) Everything about this C-stand setup is wrong. The right hand rule is not observed, the arm is set at eye level in a way someone could poke someone's eye out, the weight (of the flag and arm) is not over the high leg, there is no sandbag, and they have started by raising it not on the top riser. You can see how easily this stand would tip over. If you see a C-stand set like this—just say no!
Figure 15.15. (above, right) A properly done C-stand. The right hand rule is observed, the arm is set safely out of the way, the weight is over the high leg, and a sandbag is set also on the high leg

BUTTERFLIES AND OVERHEADS
Permanently mounted silks are seldom larger than the standard 4×4 foot. Six-by-sixes are called butterflies. They come in breakdown frames which are assembled on site, and the covering is stretched and secured with ties. Larger silks are called overheads and come in the standard sizes 8×8, 12×12, and 20×20-foot, commonly called 8 by, 12 by, and 20 by. The frames are transported disassembled with the covering off. Also available are 9×12, 20×40, 30×30, and even 40×40. Butterflies (6×6-foot) can be flown on one stand (usually a highboy). Large frames are flown on two highboys.
The critical factor in flying a 20-by is knowing that it is sufficient area to drive a sailboat at 12 knots; it can be powerful! Use extreme caution when a large silk is up. Don't fly a silk without an adequate size crew of knowledgeable grips and the proper ancillary supplies—lots of sandbags, tie-down points, and plenty of sash cord. Loose silks with no frames can be very large and are often stretched over backlot streets. On Memoirs of a Geisha, loose silks were flown over a set that covered several acres—important because the quality of sky light in California is very different from that in Japan (the setting of the picture). With all large overheads, ordering it complete means that it includes the silk, a solid (black duvetyne), a single net, and a double net, which gives you a wide variety of controls on the same frame.
Figure 15.16. A grip truck on location. Carts for C-stands and 4x4s stand ready. A four-step ladder makes it easier to get on and off the truck via the lift gate. Reflector stands are stored on the door

OVERHEAD RULES
Never leave silks unattended when they are up in the air. One grip per stand is required at all times, and two or three more when moving it in anything but dead calm conditions. These things can do some damage if they get away from you. Bag liberally! Run your lines out at as shallow an angle as you can and secure properly—a line running straight down will do very little to secure the overhead. If possible, leave the top riser of the highboy unextended and leave it loose so it can take out any torsion that builds up in the system. Standard technique for the lines is to take a 100-foot hank of sash cord and find the center, which is then secured to one of the top corners. The excess line is then coiled and left to hang far enough down so that it can be reached even if the frame is vertical. Tying off to several heavy sandbags can make readjustment easier than tying off to something permanent. It is then possible to just drag the bags a bit, rather than having to adjust the knot. Attach the rag to the frame using a shoelace knot. Set the frame up on apple boxes when attaching or removing the rag to keep it off the ground. The best idea is to learn how to do this from an experienced grip crew.
GRIFF
One more type of covering that can go on overhead frames is Griffolyn, a reinforced plastic material. Its advantage is that the white side is extremely reflective and the whole unit is highly durable and waterproof.
Griff is versatile. Its most frequent use is as a large reflector, either for sun or for a large HMI as a bounce fill. A 12×12-foot or 20×20-foot white griff with a 6K, 12K, or 18K bounced into it is also a popular combination for night exteriors. It's soft enough to appear sourceless, but reflective enough to provide a good stop. A griff on a frame can also be used by turning the black side to the subject as a solid cutter or as negative fill. Since the griff is waterproof, it can also be an excellent rain tent. With the white side down it even provides a bit of sky ambiance.
HOLDING
One of the most essential aspects of grip work is getting things to stay where they are supposed to, even under stressful conditions. A big part of the fun of grip work is facing new and original challenges, like "secure this $200,000 camera to the front of this truck and make sure it can withstand an explosion." A good deal of grip equipment is designed for exactly this purpose. However, keep in mind that more than one grip's famous last words were "that ain't goin' nowhere."

Figure 15.17. Tips for Grips (Drawing by Brent Hirn, Master of Gripology)
GRIP HEADS AND C-STANDS
Much of grip hardware is just for that purpose—gripping stuff. A good deal of the grip's job is getting things to stay where you want them. The grip head is one of the most valuable tools in the grip's repertoire for getting just about anything to stay just about anywhere. The grip box is full of things that are designed only to hold something somewhere, firmly but adjustably.
The grip head or gobo head is one of the most important inventions of the twentieth century (Figure 15.22). Versatile, powerful, and stable, the grip head has been called upon to perform a mind-boggling array of tasks. It is a connector with two pressure plates and holes for various size pins. It can also accept slotted flat plates (such as on the ears of an overhead mount) and foamcore, show card, plywood, etc. In place, it can hold a grip arm, a 5/8" stud for a light, a baby plate, a small tree branch, armature wire, pencils, and just about anything else you can imagine. A gobo arm is the same device but permanently mounted on a steel rod, which is 40 inches standard (Figures 15.14 and 15.15). Short arms are usually around 20 inches and are an important part of any order. A short arm with a grip head attached is known as a short arm assembly.
C-STAND RULES
As any grip will tell you, one of the most basic rules of gripology is the right hand rule (Figure 15.13). The rule is: always set a grip head so that gravity wants to tighten it. Remember: "righty tighty, lefty loosey." It's simple when you think about it. Grip heads are friction devices. By tightening the screw in a clockwise direction you are putting pressure on the arm (or whatever you are holding), and friction is what is both holding the arm in and preventing the pressure from releasing. If the item being held extends out from the stand, it exerts rotational force in the direction of gravity. If this rotational force turns the screw in a counterclockwise direction, it will release the pressure. Make sure that the force of the load is trying to turn the head in the same direction that you did to tighten it.

Figure 15.18. (top) Pipe clamps secure Arri LED Fresnels to an aluminum truss, usually called a rock n' roll truss
Figure 15.19. (above) A set cart for C-stands and 4x4 nets and flags
Figure 15.20. (top) Hive plasma lights mounted on Speed Rail with pipe clamps. The short pieces of Speed Rail are mounted on the truss with Crosby clamps. Speed Rail is 1-1/4” aluminum tube
Figure 15.21. (above, middle) Beach tires make it possible to roll this 12K HMI on sand (Photo courtesy Greg Bishop)
HIGHBOYS
The highboy, also called a “high-high stand,” is a roller stand used for holding large silks, making goal posts, and rigging all manner of large items. It can extend very high, usually about 18’, which makes it useful for getting lights high enough to shoot through second floor windows. Most have a larger 4-1/2” grip head.
Figure 15.22. (above) A standard grip head (Courtesy Matthews Studio Equipment)
Figure 15.23. (left) A Matthews junior grid clamp
CLAMPS
A big part of a grip’s job is making things stay where they should: C-stands, camera mounts, large rigs, sometimes lights if they are mounted on something other than a light stand or the grid. For example, if a $250,000 camera is secured to the front of a roller coaster that sustains 5Gs, the rigging has to be solid. Clips and clamps are a big part of securing whatever you don’t want moving.
Figure 15.24. (above) A park scene
using a mirror reflector, 4x4 silks, and a 6x6 griffolyn in the foreground (Photo
courtesy Casey Schmidt)
Figure 15.25. (left) A right-angle Car-
dellini clamp attached to a fence rail
Figure 15.26. (opposite page, top) A
Speed Rail rig creates mounting points
for lights, a 4x4 silk, and a China ball
(Photo courtesy Casey Schmidt)
Figure 15.27. (opposite page, bottom)
Another view of the Speed Rail rig hold-
ing a 6x6 silk with some duvetyne skirts
to limit spill onto the rest of the scene
(Photo courtesy Casey Schmidt)
STUDDED C-CLAMPS
Studded C-clamps come in a range of sizes from 4 to 10 inches and with
either baby (5/8”) studs or junior (1 1/8”) receivers (Figure 15.31). They
are used for rigging lights, grip heads, tie-off points, and other items to beams, trees, and anything else they can be fitted on. They come in various sizes, so be sure to order some with small openings, some medium, and some with a wider “reach” for clamping on to larger things.
Figure 15.28. (top) A 20x20 silk rigged to a crane—this permits quick mobility and eliminates the need for highboy stands that might limit the camera angles for the scene
Figure 15.29. (above) Another view of the crane supporting the 20x20 silk and a 6K HMI punching through an 8x8 diffusion frame
C-CLAMP TIPS
A properly mounted studded C-clamp can hold a substantial load securely. It is not good at resisting the turning load perpendicular to its axis, so don’t try to make it do that. A C-clamp will definitely leave marks on just about anything. Standard procedure is to card it: put a small piece of show card underneath the feet to protect the surface. Always put a safety line around the item being mounted and the C-clamp itself.
Choose the appropriate size C-clamp. A clamp that is too large for the
job will have to be extended too far. When this happens the threaded bar
can wobble and twist because it is too far from its support. This is an unsafe
condition and should be avoided.
If you are clamping to a pipe you must use a C-clamp that has channel
iron mounted to it (U-shaped steel feet). A flat C-clamp will not safely
grab a pipe. If available, a pipe clamp is often the better choice for this.
BAR CLAMPS
A variation on the studded C-clamp is the bar clamp, which is a furniture
clamp with a 1K stud attached to it. Bar clamps can be extremely wide (up
to several feet) and so can ft around items that would be impossible for a
C-clamp.
PIPE CLAMPS
Pipe clamps are specialty items for mounting directly to pipes and grids.
They get their most frequent use in studio work. If you are likely to be
working in a studio or an industrial situation with a pipe grid, pipe clamps
can free lights from stands.
CARDELLINI AND MAFER CLAMPS
Pronounced may-fer, Mafer clamps are indispensable: don’t leave home
without them. They can bite anything up to the size of a 2×4, and have
interchangeable studs that can do anything from rigging a baby spot to
holding a glass plate. A Cardellini is similar but a bit more versatile and
has tremendous holding power (Figure 15.34). They have dozens of other
uses:
• Grips often clamp one onto the end of the dolly track so the dolly
won’t run off the end.
• With the baby stud clamped into a grip head, they can hold up a
6×6-foot frame.
• With a baby plate mounted in it, the mafer can be attached to pipe
or board to hold a small refector card—the uses are endless.
Figure 15.30. (top) A soft panel rigged as an overhead bounce is held by Cardellini clamps (Photo courtesy Matthews)
Figure 15.31. (middle) A studded C-clamp with two baby spuds (5/8 inch) (Photo courtesy Matthews)
QUACKER CLAMP
Also known as a duck bill, a platypus, or, more prosaically, a beadboard holder; it’s a studded vise grip with 5×6-inch plates for grabbing beadboard and foamcore with minimal breakage (Figure 15.32).
WALL PLATES, BABY PLATES, AND PIGEONS
Figure 15.32. (above) A Quacker (Duckbill) clamp for holding foamcore, beadboard, etc (Photo courtesy Matthews)
Although technically different things, the names are used interchangeably. They are all basically baby studs with a flat plate attached. It can also be done with a junior (1 1/8”) receiver. The plate always has screw
Figure 15.33. (above) A pipe clamp
secures a Kino Flo LED Celeb to Speed
Rail
Figure 15.34. (right) Types of Cardellini
Clamps (Courtesy Cardellini Products)
Figure 15.35. (opposite page, top) Rig-
ging for a shoot in Venice ofers some
real challenges—outside many win-
dows there are only canals A barge and
crane was the solution for DP Oliver
Stapleton (Photo courtesy Oliver Sta-
pleton, BSC)
Figure 15.36. (opposite page, bottom)
Another ingenious floating rig by Oliver
Stapleton and his team (Photo courtesy
Oliver Stapleton, BSC)
holes. With a pigeon, some drywall screws, and a portable drill, a grip can
attach small lights, grip arms, cutters, reflectors, and props to any wood
surface. A particularly popular combination is a baby plate mounted on a
half apple, quarter apple or pancake—it is called a pigeon (Figure 15.44).
This is the standard low mounting position for small lights. When you
need even lower, a pancake (the slimmest form of apple box) should be
used, although the depth of the screws is limited. When a half apple is
not enough, other apple boxes can be placed beneath for some degree of
adjustability.
2K RECEIVERS AND TURTLES
While a pigeon is generally a baby plate, a similar rig can be made with
a 2K receiver (Figure 15.41). A variation on the 2K pigeon is the turtle,
which is a three-legged low mount for any 2K studded light. The advan-
tage of the turtle is that it is self-supporting with a sandbag or two. The
T-bone is similar. It is a “T” made of two flat steel bars with a 2K receiver.
It will mount the light lower than a turtle and can also be screwed to a
wall. T-bones all have safety chains attached. The chains are constantly in
Figure 15.37. (above, top) A large rig for a moving bus scene The space lights pro-
vide overall ambient and PARs aimed into the angled bounces are programmed to
go on and of sequentially to simulate light in a moving vehicle (Photo courtesy
Oliver Stapleton, BSC)
Figure 15.38. (above) Rigging to the grid in a studio is a job for both grips and elec-
tricians Grips rig the lights and electrics run power to them and make adjustments
(Photo courtesy TFlicker)
the way and always seem to get under the T-bone or apple box when you try to get it to sit flat. They have no conceivable application, and yet they are permanently mounted. We can all hope that manufacturers will wake up and make T-bones without the chains. Several manufacturers now
Figure 15.39. A simple and elegant solution to providing overall ambient light for a night interior: China balls are rigged to Speed Rail which is suspended by wall spreaders (Photo courtesy Bryan Hong)
make C-stands with a detachable base. When the vertical riser is removed
the legs are an independent unit with a junior hole. This makes an instant
turtle. The riser can be clamped to a fre escape or mounted in other hard
to get to places as an extendable mount.
SIDE ARMS AND OFFSET ARMS
Side arms (both 1K and 2K varieties) serve two functions: they arm the light out a bit for that little extra stretch, and, because they can clamp onto the stand at any level, they are useful for mounting a light low. Offset arms are similar but fit into the top of a baby or junior stand, not on the side.
OTHER GRIP GEAR
Because the duties of the grip team are so diverse, the equipment they use
covers a wide range. Many items are part of a “standard” grip package and
some need to be ordered for the production.
SANDBAGS
Sandbags are absolutely essential for film production. Most often used to weigh down light stands, C-stands, and highboys to make them safer, they have multiple other uses, from holding doors open to helping an actor sit higher in a chair. Typically, they are not filled with sand but with kitty litter. Sandbags come in various sizes. The most commonly used is 25 pounds; 15 pound and 5 pound bags are also used. Shotbags are more compact but they are filled with lead shot so they weigh 25 pounds. They are useful for their ability to get into places where a 25 lb sandbag won’t fit.
Figure 15.40. (top) A massive rigging job using rock and roll trusses and Speed Rail to support dozens of Arri Skypanels (Photo courtesy Arri Lighting)
Figure 15.41. (above) A 2K turtle base. This one is actually the bottom of a C+ stand. Others are designed only as bases for lights or equipment (such as wind machines) that have a 1-1/8” (junior) stud
APPLE BOXES
The term comes from the early days of Hollywood, when they used actual apple boxes. Now they are specially made items that are sturdily built. Simple boxes in four sizes, they are used for an amazing variety of purposes (Figures 15.45 and 15.46). The sizes are full apple, half apple, quarter apple, and pancake.
WEDGES
Wedges are simple wooden wedges used primarily for leveling, most often leveling dolly track. A box of wedges is a normal accessory for any work with dolly track. Smaller ones, called “camera wedges,” are used for small jobs around the camera and dolly.
CANDLE STICKS
Candle sticks are basically the upper part of a baby or junior light stand,
without the legs that extend out for stability. They are most often used to
secure lights in a crane—usually with chain vise grips which do an excel-
lent job of keeping them in place, even with the kind of heavy lights that
are typically rigged in cranes (Figure 15.43).
Figure 15.42. (left, top) A Mafer (left)
and a Cardellini clamp, both with baby
spuds
Figure 15.43. (left, bottom) Mole Maxi-
Brutes on candle sticks secured to a
crane basket
Figure 15.44. (above) A pigeon—baby
plate on a pancake
Figure 15.45. (top) Full apple, half
apple, quarter apple, and on top, a pan-
cake
Figure 15.46. (above) Three full apples
in the standard positions (from left to right): number 3 (upright—full height); number 2 (on the side—medium height); and number 1 (laid down—lowest height)
Figure 15.47. (right, top) Crowder
clamps on Speed Rail to support these
Arri lights
Figure 15.48. (right) A studded chain
vise grip
16
camera movement
Figure 16.1. An Alexa mounted on a dolly in low-mode. This allows the camera to go lower than mounting directly on the dolly head (Photo courtesy Sean Sweeney)
CAMERA MOVEMENT IN FILMMAKING
Moving the camera is much more than just going from one framing to another. The movement itself, the style, the trajectory, the pacing, and
the timing in relation to the action all contribute to the mood and feel of
the shot. They add a subtext and an emotional content independent of the
subject.
Here we can talk about the techniques and technology of moving the
camera. The most critical decision about the use of the camera is where
you put it, as we saw in Language of the Lens. Camera placement is a key
decision in storytelling. More than just “where it looks good,” it deter-
mines what the audience sees and from what perspective they see it. As
discussed in the chapter Shooting Methods, what the audience does not see
can be as important as what they do see.
Since Griffith freed the camera from its stationary singular point-of-
view, moving the camera has become an ever increasing part of the visual
art of flmmaking. In this section, we will look at the dynamics of camera
movement and also take a look at some representative ways in which this is
accomplished. The dolly as a means of moving the camera dates from the
early part of the 20th century. The crane came into its own in the 1920s; see Figures 16.2 and 16.3 for a modern version. Shots from moving vehi-
cles were accomplished in the earliest of silents, especially with the silent
comedians, who didn’t hesitate to strap a camera to a car or train. After the
introduction of the crane, little changed with the means of camera move-
ment until the invention of the Steadicam by Garrett Brown (Figure 16.6).
It was first used on the films Bound for Glory and Kubrick’s The Shining.
Figure 16.2. A crew prepares the camera for a crane shot
CAMERA OPERATING
Operating the camera is not as easy as it looks. On the surface, it seems
like you just point the camera where it needs to be. In reality, operating
demands quick reactions, physical coordination, a sense of framing, and a feel for what makes a powerful shot. All of this is done on the fly, in real time, often
with unpredictable movements by the actors, props, vehicles or even back-
ground subjects.
MOTIVATED VS. UNMOTIVATED MOVEMENT
A key concept of camera movement is that it should be motivated—the
movement should not just be for the sake of moving the camera. Motiva-
tion can come in two ways. First, the action itself may motivate a move. If
the character gets up from a chair and crosses to the window, it is perfectly
logical for the camera to move with her.
Both the start and the end of a dolly move or pan should be motivated.
The motivation at the end may be as simple as the fact that we have arrived
at the new frame, but clearly it must be a new frame—one with new infor-
mation composed in a meaningful way, not just “where the camera ended
up.” A big part of this is that the camera should “settle” at the end of any
move. It needs to “alight” at the new frame and be there for a beat before
the cut point. This is especially important if this shot might cut to a static
shot.
Particularly with the start and end of camera moves that are motivated by subject movement, there needs to be a sensitivity to the timing of the subject and also a delicate touch as to speed. You seldom want the dolly to just “take off” at full speed, then grind to a sudden halt. Most of the time, you want the dolly grip to “feather” in and out of the move. The camera movement itself may have a specific story purpose. For example, a move may reveal new information or a new view of the scene. The camera may move to meet someone or pull back to show a wider shot. Unmotivated camera moves or zooms are distracting: they pull the audience out of the moment and make them conscious that they are watching a fiction; they do, however, have their uses, particularly in very stylized filmmaking.
When you move the camera, or you do a shot like the crane down (in Shawshank Redemption) with them standing on the edge of the roof, then it’s got to mean something. You’ve got to know why you’re doing it; it’s got to be for a reason within the story, and to further the story.
Roger Deakins (Fargo, 1917, The Big Lebowski)
Figure 16.3. Grips rolling a crane onto tracks on the film 42. The grip crew does all building, moving, and operating of cranes in addition to many other tasks. Note the use of 2x12 lumber, half and quarter apple boxes, and wedges to form a ramp. Also interesting is the use of heavy timber under the dolly tracks to reduce the need for leveling gear. Lengths of lumber are normally carried on the grip truck for uses like this and specialized pieces are ordered separately (Photo courtesy Vertical Church Films)
There are many ways to find a motivation for a camera move, and they can be used to enhance the scene and add a layer of meaning beyond the shots themselves. They can also add a sense of energy, joy, menace, sadness, or any other emotional overlay. Camera movement is much like the pacing of music. A crane move can “soar” as the music goes upbeat, or the camera can dance with the energy of the moment, such as when Rocky reaches the top of the museum steps and the Steadicam spins around and around him. Motivating and timing camera moves are part of the goal of invisible technique. Just as with cutting and coverage in the master scene method, the goal is for the “tricks” to be unnoticed and not distract from the storytelling.
BASIC TECHNIQUE
There is an endless variety of ways to move the camera; it is useful to look
at a few basic categories of types of moves to provide a general vocabulary
of camera dynamics. The most fundamental of camera moves, the pan (left
or right pivot) and tilt (up or down pivot), can be accomplished in almost
any mode, including handheld.
The exception is when a camera is locked off on either a non-movable
mount (as it might be for an explosion or big stunt) or where it is on a
movable head, but the movements are locked down, and there is no opera-
tor. Many types of efect shots require the camera to be locked down so
that not even the slightest movement of the camera is possible. Sandbags
on the tripod or dolly, or even braces made of C-stands or lumber may also
be used. Beyond the simple pan and tilt or zoom, most moves involve an actual change of camera position in the shot. Other than handheld, these kinds of moves involve specific technologies and also the support of other team members: the grip department. Grips are the experts when it comes to mounting the camera in any way other than right on a tripod or a dolly, and they are the people who provide the rigging, the stabilization, and the actual operation when it comes to performing the move. A good grip crew makes it look easy, but there is considerable knowledge and finesse involved in laying smooth dolly track on a rough surface (Figure 16.3) or rigging the camera on the front of a roller coaster. Every detail of rigging is beyond the scope of this chapter, but we will touch on some of the major issues.
I really enjoy blocking and staging. I think most of visual storytelling is camera placement and how to stage action around the camera.
Hiro Murai
TYPES OF MOVES
The first uses of moving cameras were in the early 1900s. There are some fundamental camera moves that date nearly as far back.
PAN
Short for panoramic, the term pan applies to left or right horizontal move-
ment of the camera. Pans are fairly easy to operate with a decent camera
head—which sits atop the tripod or dolly, holds the camera, and permits
left/right, up/down, and sometimes sideways tilting motions (Figure 16.7).
There is one operational limitation that must be dealt with. If the camera is panned too quickly, there will be a strobing effect, which will be very disturbing. As a general rule of thumb, with a shutter opening of 180° and a frame rate of 24 or 25 FPS, it should take at least 3 to 5 seconds for an object to move from one side of the frame to the other. Any faster and there is a danger of strobing.
Figure 16.4. (above, top) An O’Connor fluid head, in this case supporting an Arri Amira. Camera heads fall into two types—hydraulic fluid heads like this and geared heads as in Figure 16.23 (Photo courtesy O’Connor Engineering)
Figure 16.5. (above) A Sachtler 7x7 Studio Fluid head on a high-hat attached to a plywood pancake (shortest member of the apple box family), Ronford-Baker legs, a spreader to keep the legs from collapsing, and the shipping case
TILT
The tilt is up or down vertical rotation of the camera without changing
position. Technically, it is not correct to say “pan up,” but as a practical matter almost everybody says it—it’s silly to “correct” a director who says “pan up”—it won’t earn you any brownie points. As we will see later in this chapter, cranes, Steadicams, stabilizer rigs, and aerial mounts are to a large extent used to break out of the confined horizontal plane and make the scenes more truly three-dimensional. Filmmaking is confined, to a large degree, by where we can put the camera. Certainly the ability of the Steadicam, drones, and similar rigs to move with action up and down stairs and slopes has opened up a new variety of moves that help with this three-dimensional effort and keep us “with” the characters as they move through space. Given the technology now available and the ingenuity of our grips and camera assistants, there is hardly anywhere a camera can’t go.
Figure 16.6. (left) Executing a Steadicam shot requires more than just the operator. Here the first AC handles the remote focus control and a grip stays close behind the operator for safety and guidance, such as if there is a potential tripping hazard coming up—this is also standard practice for all types of handheld shots or other stabilizer rigs (Photo courtesy of Brad Greenspan)
MOVE IN / MOVE OUT
Common terminology is push-in or pull-out for moving the camera toward
the scene or away from it. For example, instructions to the dolly grip:
“When he sits down, you push in.” This is different from a punch-in (see
following). Moving into the scene or out of it are ways of combining the
wide shot of a scene with a specifc tighter shot. It is a way of selecting the
view for the audience in a way that is more dramatic than just cutting from
(Figure 16.7 diagram labels: Track In / Track Out, Track Left / Track Right, Pan Left / Pan Right, Boom Up / Boom Down, Tilt Up / Tilt Down, Dutch Left / Dutch Right)
wide shot to closer shot. It has the effect of focusing the viewer’s attention even more effectively than just doing a wide establishing and then cutting to the scene; by moving in toward the scene, the camera is saying “of all the things on this street, this is the part that is important to look at.” Of course, there are infinite uses of the simple move in/move out. We may
pull back as the character moves out of the scene or as another character
enters; the move is often just a pragmatic way of allowing more room for
additional elements of the scene or to tie in something else to the imme-
diate action we have been watching. Conversely, when someone leaves a
scene, a subtle push-in can take up the slack in the framing.
ZOOM
A zoom in or out is an optical change of focal length. It changes the fram-
ing without moving the camera. Visible zooms are not popular in fea-
ture film making—certainly not since the end of the 1970s. The reason is
simple: a zoom calls attention to itself and makes the audience aware they
are watching a movie—something we usually want to avoid in invisible
technique. When a zoom is used, it is important that the zoom be motivated.
Also, it is best to hide a zoom. Hiding a zoom is an art—the zoom may be
combined with a slight lateral camera move, a dolly move, a slight pan, or
with a move by the actors so that it is unnoticeable.
Figure 16.7. (right) The six simple
camera moves
DIFFERENCE BETWEEN A ZOOM AND A DOLLY SHOT
Figure 16.8. (top) An ordinary track with move. Camera move matches the direction of the subject
Figure 16.9. (middle) Dolly across the line of movement. This one has to be used with caution. If the entire shot is not used, the screen direction will be flipped without explanation. See the chapter Continuity for further discussion of this issue
Figure 16.10. (bottom) Example of a countermove
Say you want to go from a wide or medium to a close-up during the shot. On the face of it, there would seem to be no real difference between moving the dolly in or zooming in. In actual effect, they are quite different, for several reasons. First, a zoom changes the perspective from a wide angle with deep focus and inclusion of the background to a long-lens shot with compressed background and very little of the background. It also changes the depth-of-field, so the background or foreground might go from sharp focus to soft. These might be the effects you want, but often they are not. Second, the dolly move is dynamic in a way that a zoom cannot be. With a zoom your basic point-of-view stays the same because the camera does not move; with a dolly the camera moves in relation to the subject. Even if the subject stays center frame, the background moves behind the subject. This adds a sense of motion, and also the shot ends with an entirely different background than it opened with. This is not to say that a zoom is never desirable, just that it is important to understand the difference and what each type of move can do for your scene as a visual effect. Many people will also “hide” the zoom by making some other type of move at the same time so that the zoom is not noticeable.
A very dramatic effect can be produced with a combination of zoom and a dolly. In this technique you zoom out as you dolly in. This keeps the image size relatively the same, but there is a dramatic change of perspective and background. This was used very effectively in Jaws, when Roy
Scheider as the sheriff is sitting on the beach and first hears someone call “shark” (Figure 17.22 in Optics & Focus). It was also used effectively in Goodfellas in the scene where Ray Liotta is having lunch with Robert De Niro in the diner. At the moment Liotta realizes he is being set up for killing by his old friend, the combination move effectively underscores the feeling of disorientation.
Figures 16.11, 16.12, and 16.13. (left, center, and right) A going to meet them camera move. This is a very dynamic shot as the subject distance changes. This can be used to start with a wide tracking shot and end up with a tight close-up or vice versa
PUNCH-IN
Different from a push-in, which involves actually moving the camera, a
punch-in means that the camera stays where it is, but a longer focal length
prime is put on or the lens is zoomed in for a tighter shot. The most
common use of a punch-in is for coverage on a dialog scene, usually when
going from an over-the-shoulder to a clean single. Since moving the
camera forward from an over-the-shoulder may involve repositioning the
off-camera actor and other logistics, it is often easier to just go to a longer
lens. There is some slight change in perspective, but for this type of close-
up, it is often not noticeable as long as the head size remains constant.
MOVING SHOTS
Moving shots happen in all kinds of ways. As cameras have become smaller
and lighter and new inventive camera supports have developed, there are
few if any limits on how the camera can move. When shooting with vehi-
cles, they are referred to as rolling shots.
TRACKING
The simplest and most clearly motivated of camera moves is to track along
with a character or vehicle in the same direction (Figure 16.8). For the
most part, the movement is alongside and parallel. It is certainly possible
to stay ahead of and look back at the subject or to follow along behind, but these kinds of shots are not nearly as dynamic as tracking alongside, which gives greater emphasis to the moving background and the sweep of the motion.
Figure 16.14. An example of a complex and dynamic move: a shot of this type tracks, pans, and subject distance and direction change all at the same time. Moves like this are just as easily done with Steadicam or handheld
COUNTERMOVE
If the camera always moves only with the subject, the camera is “tied
to” the subject and completely dependent on it. If the camera sometimes
moves independently of the subject, it can add a counterpoint and an addi-
tional element to the scene (Figure 16.14). Certainly it can be dynamic
and energetic; it adds a counterpoint of movement that deepens the scene.
Whenever the camera moves in the opposite direction, the background
appears to move at twice the rate it would move if the camera was track-
ing along with the subject. A variation is to move across the line of travel,
as in Figure 16.9. A variation of the countermove is where the dolly moves
in the opposite direction and the subjects cross the axis of motion as in
Figure 16.10.
REVEAL WITH MOVEMENT
A simple dolly or crane move can be used for an effective reveal. A subject fills the frame, and then with a move, something else is revealed. This type of shot is most effective where the second frame reveals new content that amplifies the meaning of the first shot or ironically comments on it.
Figure 16.15. A heavy lift drone carries a Red Dragon in a three axis stabilizer mount for remote control pans and tilts (Photo courtesy of Alpine-Aerials)
CIRCLE TRACK MOVES
When ordering a dolly and track, it is quite common to also order at least a couple of pieces of circle track. Circular track generally comes in two types: 45° and 90°. These designate whether it takes eight pieces or four pieces to make a complete circle, which defines the radius of the track. Some companies specify by the radius of the circle. A very specific use of circle
track is to dolly completely or halfway around the subject; this type of
move is easily overused and can be very self-conscious if not motivated by
something in the scene.
One important note on setting up a circle track scene: as it is quite
common to use circle track to move very slowly around the subject in a
tight shot, focus pulling can get quite complex. The best way to simplify
a circle move is to set up the shot so that the subject is positioned at dead
center of the radius of the track.
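The arithmetic behind the track types is simple enough to sketch. This hypothetical helper (the function name is my own, not a manufacturer's spec) just divides the circle by the segment angle:

```python
def pieces_per_circle(segment_angle_deg):
    """Number of identical track sections needed to close a full circle.
    45-degree sections take eight pieces; 90-degree sections take four."""
    pieces, remainder = divmod(360, segment_angle_deg)
    if remainder:
        raise ValueError("segment angle must divide 360 evenly")
    return pieces

print(pieces_per_circle(45))  # 8
print(pieces_per_circle(90))  # 4
```

The actual radius each type yields varies by manufacturer, which is why some companies specify circle track by radius instead.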
CRANE MOVES
The most useful aspect of a crane is its ability to achieve large vertical
moves within the shot. While a crane may be used only to get the camera
up high, a typical variety of crane shot is to start with a high-angle view
of the overall scene as an establishing shot and then move down and in
to isolate a piece of the geography: most often our main characters, who
then proceed with the action or dialog. This is most often used to open
the scene by combining the establishing shot with the closer-in master of
the specifc action.
The opposite move, starting tight on the scene and then pulling back
to reveal the wide shots, is an effective way to end a scene as well and is
often used as the dramatic last shot of an entire film—a slow disclosure. Depending on the content of the scene and the entire film, it can have
a powerful emotional content. The ability of the crane to “swoop” dra-
matically and fowingly can be used for exhilarating and energetic efect;
more than any other type of camera move, it can really “dance” with the
characters or the action.
ROLLING SHOT
The term rolling shot is used whenever the camera is mounted on a vehicle,
either on the picture vehicle or a camera car that travels along with the
picture vehicle. The “picture” vehicle is the one being photographed. See
Figures 16.26 and 16.27.
Figure 16.16. (above) A leveling head is essential for any camera mount. Without one, there is no way of making sure the camera is perfectly level, which is critically important to check for every camera setup. This one has a flat Mitchell base on top for mounting the camera head.

Figure 16.17. (above, top) A true nodal point head. See the chapter on Optics & Focus for why a nodal point head is required for some types of shots. This one is a Lambda head by Cartoni—it is also a typical underslung head. (Photo courtesy of Cartoni, S.p.A.)

Figure 16.18. (left, top) A Movi stabilizer rig in use by DP John Brawley. (Photo courtesy John Brawley)

Figure 16.19. (left, below) An Alexa Amira in handheld mode with a curved shoulder pad mounted to the tripod screw hole, as is typical for handheld work. (Photo courtesy John Brawley)

CAMERA SUPPORTS FOR MOVEMENT
What moves we can make is dependent on what type of equipment is supporting the camera. In the early days of film, they had only the tripod. Today, we have a huge variety of potential camera supports.

DRONES
Drones have become an important tool in filmmaking (Figure 16.15). They are the result of two trends: much more powerful and controllable mini-helicopters and far lighter cameras that can produce professional quality video (HD or higher). The advantages are obvious—aerial shots without the expense of a helicopter and pilot, small sizes that can fly into surprisingly tight spaces, and reasonable control without years of training.
HANDHELD
Handheld is any time the operator takes the camera in hand, usually held on
the shoulder, but it can be held low to the ground, placed on the knees, or
any other combination (Figures 16.18 and 16.19). For years, handheld was
the primary means of making the camera mobile in cases where a dolly
was not available or not practical. With so many other ways to keep the
camera mobile, it is often used for artistic purposes as it has a sense of
immediacy and energy that cannot be duplicated by other means.
Figures 16.21 and 16.22. Diagrams of dolly steering modes relative to the front of the dolly: standard (track), steer, round, and crab.
CAMERA HEADS
The camera cannot be mounted directly on a dolly; if it were, there would be no way to pan or tilt the camera. On dollies, cranes, and car mounts, there is also an intermediate step: the leveling head (Figure 16.16). This is the base the camera head sits on, which allows for leveling of the camera. In the case of a tripod, leveling is accomplished by lengthening or shortening one of the legs to get the camera level, but for all other types of supports some method of leveling is needed—the obvious exceptions being Steadicams and similar types of supports. Most dollies have a built-in leveling capability, but a leveling head is often needed when mounting a camera on something like a car-mount hostess tray (Figure 16.27). Camera heads make smooth, stable, and repeatable moves possible. Camera heads have two main types of mounts: the flat Mitchell base (Figure 16.5) and the ball head (Figure 16.28), which allows for leveling the head quickly. Camera heads fall into two general categories: fluid heads and geared heads.
FLUID HEAD
These use oil and internal dampers and springs to make extremely smooth
left/right and up/down moves possible (Figure 16.27). The amount of
resistance is adjustable. Most camera operators want the head to have a
good amount of resistance working against them as this makes it easier to
control the speed and timing of a move.
GEARED HEAD
These heads are operated with wheels that allow the operator to move the camera smoothly and repeat moves precisely (Figure 16.23). The geared head has a long and
venerable history in studio production. The geared head is useful not only
for the ability to execute smooth and repeatable moves but also because
it can handle very heavy cameras. Geared heads also isolate the operator’s
body movement from camera movement.
Figure 16.25. (right, top) Dolly track leveling with a half apple box, wedges, and cribbing (blocks of wood).

Figure 16.26. (right, middle) A hood mount supports three cameras for a rolling shot. Three cameras means that the two shot and close-ups on each actor can be filmed at the same time for greater efficiency in covering the scene. (Photo courtesy of CaryCrane)

Figure 16.27. (right, bottom) These side mounts (also called hostess trays) are set up to cover the front seat action from the driver’s side window and from the left rear of the front seat. (Photo courtesy of CaryCrane)
Figure 16.28. (above) Tripod and fluid head with a ball base. (Photo courtesy Sachtler GmbH)

Figure 16.29. (left) Christopher Ivins operates his Steadicam rig. The First AC stays close by in order to judge focus, which she pulls with a remote control and a small motor on the lens focus ring. (Photo courtesy Christopher Ivins)
REMOTE HEAD
Geared heads can also be ftted with motors to be operated remotely or by
a computer for motion control (mo-co). Remotely controlled heads are used
for a variety of purposes and have made possible the use of cranes, which
extend much farther and higher than would be possible if the arm had to
be designed to carry the weight of an operator and camera assistant. As
with geared heads, wheels are now also used to control remote heads on
cranes or helicopters, so it is very useful to learn how to operate them—it
takes a good deal of practice to become proficient with them.
UNDERSLUNG HEADS
Underslung rigs are fluid heads, but the camera is not mounted on top; it is suspended on a cradle below the pivot point. Underslung heads can rotate vertically far past where an ordinary fluid head can go and thus are good for shots that need to go straight up or down or even further, as in Figure
16.17.
DUTCH HEAD
Dutch angle is when the camera is tilted off horizontal. The variations are dutch left and dutch right. As with many obscure terms in film, there is much speculation as to the origin. In fact, it goes back to 17th-century England, when a Dutch royal, William of Orange, was placed on the throne of Britain. There was resentment, and anything that was seen as “not quite right” was called “dutch.” Specially built dutch heads are also available that convert back and forth between dutch and normal operation very quickly.
THE TRIPOD
Often called “sticks,” the tripod is the oldest and most basic type of camera mount but still sees constant use on all types of film and video sets (Figure 16.28). Being smaller, lighter, and more portable than just about any other
type of mount, its versatility makes up for its shortcomings. It can be
quickly repositioned and can be made to ft into very tight, odd places. Its
main advantage is that it can be transported just about anywhere.
Figure 16.30. (right) A porkchop on a Fisher dolly. The dolly is mounted on a skateboard sled that rides on the tracks.
HIGH-HAT
The high-hat (Figures 16.5 and 16.34) is strictly a mounting surface for
the camera head. It is used when the camera needs to go very low, almost
to the surface. It is also used when the camera needs to be mounted in a
remote place, such as on top of a ladder. The high-hat is usually bolted to
a piece of plywood (a pancake) that can be screwed, clamped, or strapped
to all sorts of places.
ROCKER PLATE
The drawback of a high-hat is that the camera head (fluid or geared) still has to go on top of it. As a result, the lens height is still at least 18 inches or more above the surface. If this just isn’t low enough, the first choice is usually to prop it on a sandbag. The pliable nature of the sandbag allows the
camera to be positioned for level and tilt. Any moves, however, are pretty
much handheld. If more control is desired, a rocker plate can be used. This
is a simple device that allows the camera to be tilted up and down. Smooth
side-to-side pans are not possible.
TILT PLATE
Sometimes, a shot calls for a greater range of up-and-down tilt than a typi-
cal camera head can provide. In this case, a tilt plate can be mounted on top
of the camera head (Figure 16.23). It is usually geared and can be tilted
to the desired angle. The gearing (if there is any) is generally not smooth
enough to be used in a shot. Some geared heads have a built-in tilt plate.
THE CRAB DOLLY
The crab dolly is by far the most often used method of mounting and moving
the camera. A crab dolly in the hands of a good dolly grip is capable of a surprising range and fluidity of movement. Figure 16.30 is a typical dolly
widely used in production today.
DOLLY TERMINOLOGY
Special terminology is used to describe dolly motion so that it can be com-
municated precisely. This is especially important when you need to tell
the grip what you need for the shot (Figures 16.21 and 16.22).
DOLLY IN/OUT
Move the dolly toward or away from the subject. When a dolly is on the floor (i.e., not on track) and you want to move forward, there are two choices: “move in” can mean either move forward on the axis of the lens or on the axis in which the crabbed wheels are aiming. These are “in on the lens” or “in on the wheels.”
DOLLY LEFT/RIGHT
Move the dolly left or right. If the dolly is on track, it is left or right in relation to the axis of the track. If the dolly is on the floor, then it is left or right in relation to the subject.
BOOM UP/DOWN
Nearly all dollies have a boom: a hydraulic arm capable of moving vertically
in a smooth enough motion to be used in a shot without shakes or jarring.
Some boom terms include top floor and bottom floor (bargain basement).
Figure 16.31. (top) Cranes mounted on automobiles have become a very popular method for doing running shots, as in this rig for The Dark Knight. The grips have built a ramp so the vehicle can smoothly negotiate the steps.

Figure 16.32. (above) The Cable Cam in use for a scene on a bridge.

CRAB LEFT/RIGHT
Most dollies have wheels that can crab (Figure 16.22); that is, both front and rear wheels can be turned in the same direction, allowing the dolly to move laterally at any angle. For most normal operations, the rear wheels are in crab mode and are the “smart wheels.” The front wheels are locked in and function as the dumb wheels. For a true crab move, all four wheels are switched to crab mode. There is another variation that can be done only with certain dollies. This is roundy-round, where the wheels can be set so that the dolly revolves in a full 360° circle on its own center. To do this, the front and rear wheels are crabbed in opposite directions.
DANCE FLOOR
Stable, shake-free dolly moves can only be accomplished on smooth floors. If there is no room for track to be laid, or if the director is looking for dolly moves that can’t be accommodated by straight or curved track, a dance floor can be built that allows the camera to move anywhere. A dance floor is built with good-quality 3/4-inch plywood (usually birch) topped with a layer of smooth masonite. It is important that the joints be offset and then carefully taped with paper tape. This forms an excellent surface for smooth moves. The dolly can crab and roll anywhere, and combination moves can be quite complex. The only drawback is that you often have to avoid showing the floor. A smooth floor or dance floor becomes especially critical if anything other than a wide lens is up on the camera because, with a longer lens, every bump in the floor will jiggle the camera.
EXTENSION PLATE
When the camera is mounted on the dolly, it may be necessary to extend it to the left, right, or forward of where the dolly can go—for example, if you need to place the camera at the center of a bed (Figure 16.35). This can be done with an extension plate that mounts on the dolly; the camera head is then mounted at the end.
LOW MODE
Sometimes the camera needs to be lower than the boom can go. In this case, there are two possibilities. Some dollies can have their camera mounting arm reconfigured so that it is only a few inches above the floor. If this is not available or is not enough, a Z-bar can be used to get the camera all the way to the floor (Figure 16.35). The Z-bar is basically an extension arm that extends out and then down as close to the floor as possible.
FRONT PORCH
Some dollies have a small extension that fits on the front of the dolly—the front porch; this is also known as a cowcatcher. This can be used to hold the
battery or as a place for the operator or the camera assistant to stand during
a move. As with all dolly accessories, the grip crew is capable of getting
very creative with how they are used for any type of situation.
SIDE BOARDS
Sideboards fit on either side of the dolly as a place for the operator or camera assistant to stand. They are removable for transportation and for when the dolly has to fit through tight spaces. These are especially important for
complex moves that require the operator to shift their body position.
RISERS
Six, 9, 12, or 18-inch risers can place the camera higher than the boom travels. The longest extensions can get the camera very high but at the price of complete stability.

Figure 16.33. (top) A school bus has a back porch rigged to hold the generators needed for the shot. (Photo courtesy Michael Gallart)

Figure 16.34. (middle) A high-hat. In this case, it is a bowl shape for fluid heads that can be leveled without adjusting the tripod, high-hat, or other mount. (Photo courtesy 3D Video Systems)

Figure 16.35. (above) A Fisher dolly on skateboard wheels with a drop-down mount for low-angle shots. (Photo courtesy of J.L. Fisher, Inc.)

STEERING BAR OR PUSH BAR
This allows the dolly grip to push/pull the dolly and also to steer the dolly in standard mode (where only the rear wheels pivot) or in crab mode, where both sets of wheels pivot.

CRANES
Cranes are capable of much greater vertical and horizontal movements than a dolly. There are two types: jib arms have no seat for the cameraperson and are usually operated by someone standing on the floor or perhaps an apple box. True cranes have seats for the operator and a camera assistant.
Large cranes can generally get the camera to 27’ or more above the base. A typical crane is shown in Figure 16.36. Telescoping cranes, pioneered by Technocrane, are also capable of making the boom arm longer or shorter, as in Figures 16.3 and 16.39.

Figure 16.36. Crane and remote head mounted on a pickup. (Photo courtesy of LoveHighSpeed)
Both cranes and jib arms have one fundamental characteristic that may become a problem. Because they are all mounted on a pivot point, the arm always has some degree of arc as it moves up, down, or laterally. With dolly arms, this degree of arc is usually negligible for all except exacting macro or very tight work that calls for critical focus or a very precise frame size. For nearly all cranes, there is a pivot point, and behind this point are counterweights. This is different from a dolly, where the boom arm is fixed and usually operated by hydraulics.
The counterweights extending behind the pivot point have two important consequences. First, it is important to take this backswing into account when planning or setting up a crane move. If there isn’t sufficient room, at best your moves will be limited and at worst something will be broken. The second is an important safety issue.
When you are on a crane, the key grip or crane grip is in charge. Nobody gets on or off the crane without permission of the crane grip. The reason for this is that your weight and the camera are precisely counterbalanced by the weights on the back end. If you were to suddenly get off the crane without warning, the camera end would go flying up in the air and very likely cause damage or injury. With anyone getting on or off, or with any changes in equipment, the crane grip and the back-end grip communicate loudly and clearly so that every step is coordinated.
Other safety issues when working on a crane: always wear your safety belt, and be extremely careful around any electrical wires. After helicopters and camera cars, cranes around high-voltage wires are the leading cause of serious injury and death in the motion picture industry. Take it seriously. The best bet is to tell your crane grip what you want and then just let him be in charge.
Figure 16.37. A suicide rig built and operated by Mark Weingartner. Kids, don’t try this at home. (Photo courtesy Mark Weingartner)

CAR SHOTS
Car shots have always been a big part of film production. In the old studio days, they were usually done on sets with rear projection of moving streets visible through the rear or side windows. Special partial cars called bucks had the entire front of the car removed for ease of shooting.
Rear or front projection of exterior scenes is rarely used these days, partly because the technology of shooting on live locations has been perfected, as has film or digital replacement of the background. Car shots are accomplished with car mounts or a low-boy trailer (Figure 12.9 in Lighting Sources). The trailer is low enough that the shot of the actors doesn’t seem to be unnaturally high off the ground.
CAMERA POSITIONS FOR CAR SHOTS
The standard positions for car shots are on the hood and at either the passenger- or driver-side window. Those at the side windows are accomplished with a hostess tray (Figure 16.27). The ones on the front are done with a hood mount
(Figure 16.26). These two components are the standard parts of a car rig
kit, but be sure to specify both if you need them. On low-budget produc-
tions where car rigs are not possible, there are some standard tricks. For
shots of the driver, the operator can sit in the passenger seat. For scenes
with two people in the front, the operator can sit in the back seat and do
3/4 back shots of each, two shots of both, and so on. In such cases, exterior
mounted lights for the car are usually not available, so it is common to let
the outside overexpose 1 to 2 stops and leave the interior slightly underex-
posed. It also helps greatly to pick streets with greenery or dark walls on
the side of the street to hold down the overexposure of the exterior.
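The stop arithmetic behind letting the exterior run hot is simple: each stop represents a doubling (or halving) of the light. A minimal sketch (the function name and the two-stop figure used here are illustrative):

```python
def stops_to_ratio(stops):
    """Each stop of exposure represents a doubling (or halving) of light."""
    return 2.0 ** stops

# Letting the exterior read 2 stops over the interior means it receives
# 4x the light of a "correct" exposure; 1 stop over would be 2x.
two_over = stops_to_ratio(2)
one_over = stops_to_ratio(1)
```

This is why picking streets with greenery or dark walls helps: it narrows the interior/exterior brightness ratio the camera has to hold.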
Figure 16.38. Steadicam operator Santiago Yniguez has the arm mounted on a two-wheel rig operated by a grip.

VEHICLE TO VEHICLE SHOOTING
Camera cars are specialized trucks with very smooth suspension and numerous mounting positions for multiple cameras. Camera cars are used in two
basic modes. For close-ups of the actors, the picture car is usually towed by
the camera car or mounted on a low-boy trailer that is towed. Towing has
two advantages. First, the position of the picture car doesn’t change radi-
cally and unpredictably in relation to the cameras, which can be a problem
for the camera operators and the focus pullers. Second, it is much safer
because the actor doesn’t have to perform and try to drive at the same time.
A simpler technology for towing shots is the wheel-mount tow. This is
a small two-wheel trailer that supports only the front wheels of the car.
Because the picture car is still at ground level, there are few problems with
perspective. This can be an advantage if, for example, the car has to stop
and someone approaches the window. This could all be done in one shot,
where it would be difficult if the car is mounted on a full trailer. One safety consideration for front-wheel tows: the tires are usually held onto the tow carriage with straps. Camera positions for vehicle to vehicle usually repeat the standard positions for hood mounts. A crane may also be mounted on the camera car, which can be used for very dynamic moves such as starting with the camera shooting through the windshield, then pulling back and up to show the whole car traveling alongside.

Figure 16.39. An extending Technocrane mounted on a camera car. All cranes move up/down and left/right, but some add an additional axis of motion: in and out. The arm can be extended or retracted smoothly during the move. Some heads can also be revolved so that the camera spins horizontally. The Technocrane was the first to offer this movement, but other companies have made similar devices. (Photo courtesy CaryCrane)

AERIAL SHOTS
Aerial shots were also attempted very early in film history. Vibration has always been a problem with aerial shots, as has the pressure of the windstream. Both make it difficult to get a good stable shot and control the camera acceptably. The Tyler mounts for helicopters isolate the camera
from vibration and steady it so it can be operated smoothly. Today, most
aerial shots are accomplished with remote head mounts, with the camera
mounted to the exterior of the aircraft and the operator inside using
remote controls, but in tight budget or impromptu situations it is still
sometimes necessary for the camera operator to lean outside and balance
on the pontoon—hopefully with the proper safety rig. In such cases, don’t
forget to secure the camera as well as any filters, matte box, or other items
that might come loose in the slipstream.
OTHER TYPES OF CAMERA MOUNTS
There are some other methods of rigging cameras for use in specialized
situations. As with the other camera support systems we have covered,
the proper equipment needs to be reserved for rental and, in many cases, trained operators need to be booked, which means that what the director
and DP want to do with camera movement has to be well thought out in
preproduction.
STEADICAM
The Steadicam revolutionized camera movement (Figures 16.29 and 16.38).
It can smoothly move the camera in places where a dolly would be impractical or difficult, such as stairs, rough ground, slopes, and sand. A skilled operator can pull off amazing shots that can almost be an additional character in the scene. In standard mode, the film or video camera is mounted on top of the central post, and the operator’s video monitor and batteries
ride on the sled at the bottom of the rig. The only limitation is that since the post extends down from the camera, that is the lower limit of travel for the camera. To go any lower than this, the entire rig must be switched to low-mode, which generally takes several minutes.

Figure 16.40. Sliders have become very popular for their ability to make small camera moves with a minimum of equipment and setup. (Photo courtesy Cinevate)
17
optics & focus
Figure 17.1. Shallow depth-of-field is used as a storytelling device in this shot from The Handmaid’s Tale.

THE PHYSICAL BASIS OF OPTICS
Except for certain minor differences, the principles of optics and the use of lenses are the same for film and video. Nearly all principles of optics and optical design are based on a few properties of physics. The two most basic are reflection and refraction. There are a few things we need to know about the basic behavior of light in order to understand the fundamentals of optics.
REFRACTION
The refraction of visible light is an important characteristic of lenses that allows them to focus a beam of light onto a single point. Refraction, or bending of the light, occurs as light passes from one medium to another when there is a difference in the index of refraction between the two materials. Refractive index is defined as the relative speed at which light moves through a material with respect to its speed in a vacuum. When light passes from a less dense medium, such as air, to a more dense medium, such as glass, the speed of the wave decreases. Conversely, when light passes from a more dense medium to a less dense medium, the speed of the wave increases. The angle of refracted light is dependent upon both the angle of incidence and the composition of the material into which it is entering. “Normal” is defined as a line perpendicular to the boundary between two substances.
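This relationship is captured by Snell’s law, n1 sin θ1 = n2 sin θ2, with both angles measured from the normal. A quick sketch (the indices used, roughly 1.0 for air and 1.5 for glass, are typical textbook values, not from the text):

```python
import math

def refraction_angle(n1, n2, incidence_deg):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2), angles from the normal."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Light entering glass (n ~ 1.5) from air (n ~ 1.0) at 30 degrees bends
# toward the normal because it slows down in the denser medium.
theta2 = refraction_angle(1.0, 1.5, 30.0)  # about 19.5 degrees
```

Note that the refracted angle is always smaller than the incident angle when entering the denser medium, which is the bending that lets a curved glass surface concentrate rays toward a focus.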
FOCAL LENGTH AND ANGLE OF VIEW
The focal length of the lens is the distance between the optical center of
the lens and the image sensor when the subject is in focus; it is usually
stated in millimeters such as 18 mm, 50 mm, or 100 mm (Figure 17.5).
For zoom lenses, both the minimum and maximum focal lengths are
stated, for example, 18–80 mm.
The angle of view is the visible extent of the scene captured by the
image sensor, stated as an angle. Wide angles of view capture greater
areas, small angles smaller areas. Changing the focal length changes
the angle of view. The shorter the focal length (such as 18 mm), the
wider the angle of view and the greater the area seen. The longer the
focal length (100 mm, for example), the smaller the angle and the
larger the subject appears to be. The angle of view is also affected by the size of the sensor or film format. The smaller the sensor size, the greater the angle of view will be for the same focal length. The difference between two formats (sensor sizes) is called the crop factor. Charts and calculators are available online to help you determine what the angle of view will be for a particular combination of lens focal length and sensor size. As we will see, sensor size also affects depth-of-field—a larger sensor (or film format) will have less depth-of-field for the same focal length and aperture setting.

Figure 17.2. A prime lens has only one focal length, unlike a zoom, which is variable in focal length. This is a standard set of primes for 35mm film. Of course, they can be used on any video camera with the appropriate lens mount, but any given focal length will have a different angle of view depending on the size of the sensor. A lens is defined by its focal length and its maximum wide-open aperture. (Photo courtesy of Schneider Optics)
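The focal-length/sensor relationship can be stated directly: for a sensor dimension w and focal length f, the angle of view is 2·arctan(w / 2f). A sketch using illustrative figures (the 24.9 mm Super 35 width and 36 mm full-frame width are common approximate values, not from the text):

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees: 2 * arctan(w / 2f)."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Shorter focal length -> wider angle of view on the same sensor.
wide = angle_of_view(24.9, 18.0)    # roughly 69 degrees
narrow = angle_of_view(24.9, 100.0)  # roughly 14 degrees

# Crop factor: ratio of sensor widths. A smaller sensor sees a narrower
# angle for the same focal length, as if the focal length were multiplied.
crop = 36.0 / 24.9
```

The same formula also shows why a larger sensor needs a longer lens to match a given angle of view, which in turn is part of why larger formats end up with shallower depth-of-field for equivalent framing.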
F/STOP
It is one thing to have the lens form an image on the focal plane, but the
amount of light that reaches it must be controlled. This is done with an
aperture, which is nothing more than a variable size hole that is placed in
the optical axis.
The f/stop of a lens is a measure of its ability to pass light to the image plane. The f/stop is the ratio of the focal length of a lens to the diameter of the entrance pupil. However, this is a purely mathematical calculation that does not account for the varying efficiency of different lens designs. T-stop (transmittance stop) is a measurement of actual light transmission measured on an optical bench. F/stops are used in depth-of-field and hyperfocal calculations, and T-stops are used in setting exposure. T-stops are typically 1/3 to 1/2 stop less than the f/stop.
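The ratio can be checked in a couple of lines (the 50 mm focal length and 25 mm pupil here are illustrative numbers, not from the text):

```python
def f_stop(focal_length_mm, entrance_pupil_mm):
    """f/stop = focal length divided by entrance pupil diameter."""
    return focal_length_mm / entrance_pupil_mm

# A 50 mm lens with a 25 mm entrance pupil is an f/2 lens.
n = f_stop(50.0, 25.0)

# Full stops step by sqrt(2), since light varies with the square of the
# aperture diameter: 1, 1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22.
next_stop = n * 2 ** 0.5  # about 2.8
```

This is also why the √2 progression of the standard stop series corresponds to halving the light at each step.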
When setting the aperture on a lens, never go backward. Most apertures
have a certain amount of backlash that must be compensated for. If it is
necessary to go to a larger stop (open up), open the lens all the way up and
then reset the stop.
FOCUS
Focus is a much misunderstood aspect of filmmaking. What is “in focus”?
Theoretically, it means that the actual object is projected onto the flm or
video “as it appears in real life.” The human eye tends to perceive every-
thing as in focus, but this is a result of the eye/brain interaction. The eye
is an f/2 optic and may be considered a fairly “wide-angle” lens, so much
of the world actually is in focus, certainly in brightly lit situations. But,
nearly imperceptible to us, the focus is constantly shifting. This is accom-
plished by the muscles that control the lens of the eye. They distort its
shape to shift the focus. If you look at something very close in dim light,
the background will be out of focus, but most likely you will not perceive
it—because you are “looking” at the near object. “Looking” means the
brain is focusing your attention. This is what differentiates the eye from a
camera: our mental focus is a condition of our consciousness—the camera
simply records everything. Figure 17.1 is an example of shallow depth-of-field.
As we will see later, a great number of the practices of focus—focal
length, composing the frame, and even lighting—are attempts to re-create
this mental aspect of focus and attention. We are using the camera to imi-
tate how the eye and brain work together to tell a visual story in an imita-
tion of how life is perceived by the mind.
First, the technical basics: the taking lens is the optical system that projects
the image onto the flm or video sensor, which is called the image plane. All
imaging, whether photography, cinema, video, or even painting, is the
act of taking a three-dimensional world and rendering it onto this two-
dimensional plane.
When discussing focus, we often tend to think only in terms of the flat image plane, but it is more useful to remember that the lens is forming a three-dimensional image in space, not a flat picture plane. It is the flat picture plane that must be “focused” onto. It is the only part of the image that gets recorded.
The image plane is also called the Principal Plane of Focus—sort of the
uptown business address for what we commonly call the focal plane. Think
of it this way: we are shooting a scene that has some foreground bushes,
a woman standing in the middle, and some mountains behind her. The
woman is our subject. We focus the lens so that she is sharply projected
onto the image plane.
In our three-dimensional model, the bushes and the mountains are also projected by the lens, but in front of her and behind her. In other words, they are being projected into the camera, but in front of and behind the Principal Plane of Focus. As a result, they are out of focus. By shifting the focus
of the lens, or by stopping down, or using a wider angle lens, we can bring
them into focus, but let’s assume we are shooting wide open with a fairly
long lens. By changing the focus of the lens, what we are actually doing is
shifting that three-dimensional image backward and forward. If we shift
it backward, the mountains are focused on the image plane; if we shift
forward, the bushes are focused. Only objects that are projected sharply
on the image plane are actually in critical focus. But there are many objects
that are only slightly in front of or behind the principal subject. If we stop down a little, thus increasing depth-of-field, they appear sharp (Figures 17.9 through 17.12).

Figure 17.3. (top) A view of the aperture ring set at T/2.1 on a Zeiss Prime and a look down the barrel at the iris wide open.

Figure 17.4. (above) The aperture ring set at T/22 and a look down the barrel with the iris closed down all the way.

But they are not actually sharp. This is called apparent focus. What is the boundary line between actual focus and apparent focus? There is none—at
least not technically definable. It is a very subjective call that depends on many factors: perception, critical judgment, the resolving power of the lens, the resolving power of the film or video, the amount of diffusion, the surface qualities of the subject, lighting, and so on. Also very important is the end use of the footage. Something that appears in focus on a small television might be horribly soft on an IMAX screen. There is a technical measurement of critical focus that is discussed below; it is called the circle of confusion. Note also that depth-of-field is different from depth-of-focus, as in Figure 17.8.
MENTAL FOCUS
The viewing audience will tend to focus their attention on the part of the
image that is in focus. This is an important psychological function that is
valuable in visual imagery and storytelling with a lens.
But cinematographers are engaged not only in shaping mental perception;
they are also technicians. We need some way of quantifying focus,
however arbitrary that might be. Let's think about a single ray of light—
for example, an infinitely small (or at least a very tiny) point of light that is
the only thing in the field of view. This sends a single ray of light toward
the lens. As the ray of light leaves the object, it expands outward; no set of
light rays is truly parallel, not even a laser or the light from a distant star.
The lens captures these slightly expanding rays of light and reconcentrates
them, bending them back toward each other. This forms a cone behind
Figure 17.5. Field of view of a standard set of high-speed prime lenses on a 35mm format camera at a 16x9 aspect ratio. The lower the number of the focal length, the wider the field of view of the lens. The camera remains in the same position for all these examples. The frames are, from top to bottom: 18mm, 25mm, 50mm, and 85mm prime lenses.
the lens. Where these rays actually meet is where the image is in focus. The
lens can then be adjusted so that this single point of light is sharply focused
on the image plane.
Figure 17.6. Akira Kurosawa used almost exclusively very long focal length lenses; the compression of space they produce is key to this composition from Seven Samurai.
Now, we shift the lens so that the image of the dot of light is not exactly
at the image plane. What happens? The image of the dot gets larger, because
we are no longer at the confluence of the rays of light as concentrated by
the lens. If we do this only slightly, no one may notice. We say that this is
still acceptable focus. If we shift a lot, most people would then perceive it
as out of focus. Taking into account the various factors, imaging scientists
have quantified how much bigger that dot can be and still be in "acceptable"
focus. But who decides what is "acceptable" when it comes to
focus? Optical scientists have developed a standard we can use to quantify
this aspect of lens performance.
CIRCLE OF CONFUSION
This standard is called the circle of confusion. Circle of confusion is defined
as the largest blurred point of light that will still be perceived as a point by
the human eye. It is a measure of how large the image of a point source can
be before it is unacceptably out of focus.
Theoretically, the point of light projected onto the film plane should be
the same size as the infinitely small point of light it is seeing, but due to
the nature of optics, it can never be perfect. For film work in 16mm, the
circle of confusion varies from 1/2000" (.0005") for critical applications
to 1/1000" (.001"). For 35mm it ranges from 1/700" (.0014") to 1/500"
(.002"). The circle of confusion is an important part of defining the depth-
of-field at a given f/stop; it is part of the calculation. Whenever you
look at a depth-of-field chart, you will see listed the circle of confusion
used in the calculations.
Figure 17.7. (above) A very wide lens makes extreme deep focus possible in The Good, The Bad, and the Ugly. Sergio Leone was a master of using lenses for storytelling effect.
Figure 17.8. (left) The difference between depth-of-field (in front of the lens, at the subject) and depth-of-focus (behind the lens at the film plane).
DEPTH-OF-FIELD
Back to our model of a three-dimensional projected image (Figure 17.8).
The portion of this image that falls on the image plane and is within the
circle of confusion is called the depth-of-field. It has a near and far limit, but
these fall off gradually. A number of factors affect depth-of-field:
• Focal length of the lens. The shorter the focal length, the greater
the depth-of-field.
• The aperture of the lens. The smaller the aperture, the greater the
depth-of-field.
• Image magnification (object distance). The closer the subject is to
the image plane, the less the depth-of-field.
• The format: larger formats (35mm or Imax) have less depth-of-
field than smaller formats (16mm or most digital sensors).
• The circle of confusion selected for the situation.
• Indirectly—the resolving power of lens and film, end use, diffu-
sion, fog, smoke, the type of subject.
Depth-of-field is not evenly distributed in front of and behind the plane
of critical focus: roughly one-third falls in front and two-thirds behind,
sometimes more with modern lenses, because the zone behind the plane of
focus is, of course, farther away. This may be crucial when composing shots with very
limited depth-of-field. Most camera assistants carry calculators for depth-
of-field and other optical information they need quickly on the set.
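As a sketch of how the factors listed above interact, the standard thin-lens depth-of-field approximation can be coded directly. The formulas and the 0.025 mm circle of confusion are common reference values for 35mm work, not figures taken from this text:

```python
# Sketch of a depth-of-field calculator. All units are millimeters.
# c (circle of confusion) is the chosen standard for the format;
# 0.025 mm is a commonly cited value for 35mm, used here as an assumption.

def depth_of_field(focal_mm, f_stop, distance_mm, coc_mm=0.025):
    """Return (near limit, far limit) of apparent focus in mm.

    Standard thin-lens approximation; float('inf') for the far limit
    means focus extends to infinity (beyond the hyperfocal distance).
    """
    x = f_stop * coc_mm * (distance_mm - focal_mm)
    near = distance_mm * focal_mm**2 / (focal_mm**2 + x)
    far = (distance_mm * focal_mm**2 / (focal_mm**2 - x)
           if focal_mm**2 > x else float('inf'))
    return near, far

# A 50mm lens at f/2.8 focused at 3 meters:
near, far = depth_of_field(50, 2.8, 3000)
```

Stopping down (a larger f/stop number) widens the near/far spread, and more of the range falls behind the focus plane than in front of it, as described above.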
HOW NOT TO GET MORE DEPTH-OF-FIELD
Due to the principles of physics, wide-angle lenses will have more depth-
of-field at a given f/stop. We must dispel one of the most persistent myths
of filmmaking. Many people still believe that if you are having trouble
getting the important elements in focus, the answer is to put on a wider-
angle lens and you will have greater depth-of-field. Technically true, but in
practice, they then move the camera forward so they have the same frame
size. The result? You end up with the same depth-of-field you started
with, because you end up with the same image magnification. It is image
magnification that is the critical factor. You are decreasing subject distance and
increasing image magnification, which decreases depth-of-field.
Figure 17.9. (top) In both of these frames, the focal length and distance from camera to subject are the same but the f/stop changes. In the top frame, the lens is wide open, and the depth-of-field is very small; only one card is sharp.
Figure 17.10. (bottom) The lens is stopped down to f/11 and almost all the cards are in apparent focus—meaning that they only appear to be in focus because they are within the depth-of-field. Critical focus, the point at which the lens is actually focused, is still on the red king.
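A quick numerical check of this point, using the same standard thin-lens approximation (the 0.025 mm circle of confusion is an assumed reference value, not from the text):

```python
# Compare a 50mm lens at 4 m with a 25mm lens moved in to 2 m,
# which gives the same frame size (same image magnification).
# Units are millimeters; thin-lens approximation.
def total_dof(f, n, s, c=0.025):
    x = n * c * (s - f)
    near = s * f**2 / (f**2 + x)
    far = s * f**2 / (f**2 - x)
    return far - near

long_lens = total_dof(50, 2.8, 4000)   # 50mm lens from 4 meters
wide_lens = total_dof(25, 2.8, 2000)   # 25mm lens from 2 meters, same framing
# The two results come out nearly the same: once the camera moves in
# to restore the frame size, the wider lens buys almost no extra depth.
```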
HYPERFOCAL DISTANCE
For every focal length and f/stop, there is a particular focus distance that
is special: the hyperfocal distance. This is the closest focus distance at which
objects at infinity are still in acceptable focus. When a lens is
set at the hyperfocal distance, everything from 1/2 of the hyperfocal distance
to infinity will be in focus. There are two ways of defining hyperfocal
distance (Figure 17.13).
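As a sketch, the commonly used formula for hyperfocal distance, H = f²/(N·c) + f, can be computed directly; the 0.025 mm circle of confusion is an assumed 35mm reference value, not a figure from this text:

```python
# Standard hyperfocal distance formula: H = f^2 / (N * c) + f,
# where f is focal length (mm), N the f/stop, c the circle of confusion (mm).
def hyperfocal_mm(focal_mm, f_stop, coc_mm=0.025):
    return focal_mm**2 / (f_stop * coc_mm) + focal_mm

h = hyperfocal_mm(50, 8)  # 50mm lens at f/8
# h comes out around 12.5 meters; focused there, everything from
# roughly h/2 (about 6.3 m) to infinity is in acceptable focus.
```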
First: Hyperfocal distance is the focus setting of the lens when objects at
infinity and objects at the nearest point to the camera are both in acceptable
focus. Second: If the lens is set at the hyperfocal distance, both objects
Figure 17.11. (top) In this series, the f/stop remains the same but the focal length changes. With a wide lens (top) all the cards are in focus.
Figure 17.12. (bottom) With a very long lens at the same f/stop, the depth-of-field only covers one card. The camera is the same distance from the subject; only the focal length has changed.
point may be 10 inches or more in front of the film plane. Thus, if you
are shooting a close-up at the wide end of a zoom, it's as if you were 10
inches closer to your subject matter, which also reduces your depth-of-
field. Being closer, you of course have less depth-of-field. This is one of
the reasons that zooms are seldom used in macro, table-top, and other situations.
MACROPHOTOGRAPHY
For extreme close-up work (macrophotography), it is more useful to think
in terms of image magnification instead of depth-of-field. Macrophotography
is any imaging where the image size is near to or greater than the
actual size of the object (more than a 1:1 reproduction ratio). For example,
photographing a postage stamp full frame is macro work. Regular prime
lenses can seldom focus closer than 9 or 10 inches; zooms generally have
a set of problems all their own. The most critical aspect of macro work is the
degree of magnification. A magnification of 1:1 means that the object will
be reproduced on film actual size—that is, an object that is 1/2 inch in reality
will produce an image on the negative (or video sensor) of 1/2 inch. 1:2
will be 1/2 size, 1:3 will be 1/3 size, and so on. In film, the 35mm academy
frame is 16mm high and 22mm wide. Most lenses of ordinary design can
focus no closer than a ratio of 1:8 or 1:10, which is why specialty lenses
are needed for more extreme magnifications.
Table 17.1. Focus with diopters.
Diopter   Focus distance   Actual distance from
power     of lens          diopter to subject
+1/2      Infinity         78-3/4"
          25'              62-1/2"
          15'              54-3/4"
          10'              47-1/2"
+1        Infinity         39-1/2"
          25'              34-3/4"
          15'              32-1/2"
          10'              29-3/4"
          6'               25-1/4"
          4'               21-3/4"
+2        Infinity         19-3/4"
          25'              18-1/2"
          15'              17-3/4"
          10'              16-3/4"
          6'               15-1/2"
+3        Infinity         13-1/4"
          25'              12-1/2"
          15'              12-1/4"
          10'              11-3/4"
          6'               11-1/4"
          4'               10-1/2"
EXPOSURE COMPENSATION IN MACROPHOTOGRAPHY
When a small image is being "spread" over a large piece of film, it naturally
produces less exposure. With reproduction ratios of greater than 1:10,
exposure compensation is necessary. The formula for this is:
Shooting f/stop = (f/stop determined by meter) / (1 + magnification ratio)
Example: meter reading is f/8. Your reproduction ratio is 1:2 or 1/2 size.
The calculation is 8/(1 + .5) = 5.3.
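The formula translates directly into code; this sketch simply automates the arithmetic of the example:

```python
# Shooting stop for macro work, per the formula above:
# shooting f/stop = metered f/stop / (1 + magnification ratio)
def macro_shooting_stop(metered_stop, magnification):
    return metered_stop / (1 + magnification)

stop = macro_shooting_stop(8, 0.5)  # metered f/8 at 1:2 (half life-size)
# stop is about 5.3, i.e. open up roughly one stop from the meter reading.
```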
DEPTH-OF-FIELD IN CLOSE-UP WORK
There are many misconceptions associated with macrophotography; perhaps
the most basic is that "wide-angle lenses have more depth-of-field."
Depth-of-field is a function of image size, not focal length. While it is
true that wide-angle lenses have more depth-of-field, the problem is that
once you have put a wider lens on, you still want the same image you had
before, and in order to accomplish that, you must move the camera closer
to the subject. Once you have done this, the depth-of-field is the same as
it was before, since focus distance is also a determinant of depth-of-field.
The important aspects are:
• Depth-of-field decreases as magnification increases.
• Depth-of-field decreases as focus distance decreases.
• Depth-of-field is doubled by closing down the lens two stops.
CALCULATING DEPTH-OF-FIELD IN CLOSE-UP WORK
Calculating depth-of-field in extreme close-up work is different
from normal situations. At magnifications greater than 1:10, the
depth-of-field is extremely small, and it is easier to calculate the total
depth-of-field rather than near/far limits of focus.
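One common approximation for total depth-of-field at high magnification (a standard optics rule of thumb, not a formula given in this text) is T = 2Nc(1 + m)/m², where N is the f/stop, c the circle of confusion, and m the magnification:

```python
# Total depth-of-field at high magnification (common approximation,
# assumed here): T = 2 * N * c * (1 + m) / m^2. Units in mm.
def macro_total_dof_mm(f_stop, magnification, coc_mm=0.025):
    m = magnification
    return 2 * f_stop * coc_mm * (1 + m) / m**2

t = macro_total_dof_mm(8, 1.0)  # 1:1 reproduction at f/8
# t is 0.8 mm: at life size, total depth-of-field is under a millimeter,
# which is why macro focus is usually handled by moving the subject or camera.
```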
CLOSE-UP TOOLS
Extreme close-up photography can be accomplished with a variety of
tools—diopters, macro lenses, extension tubes/bellows rigs, snorkels, and
specialized lenses.
DIOPTERS
Diopters are lenses that are placed in front of the camera lens and reduce
the minimum focusing distance of the lens. The lenses are measured in
diopters, which is the reciprocal of the focal length as measured in meters.
A plus 1 diopter has a focal length of 1 meter; a plus 2 is 1/2 meter, and so
on. Minimum focusing distance with the lens set at infinity is determined
by dividing the diopter number into 100 cm. As an example, a +2 diopter
would be 100/2 = 50 cm. This equals 19.68 inches.
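The arithmetic of that spec is easy to sketch; the function name below is a hypothetical helper, not from the text:

```python
# Working distance with the main lens focused at infinity:
# distance in cm = 100 / diopter power, measured from the diopter itself.
def diopter_distance_cm(power):
    return 100.0 / power

d = diopter_distance_cm(2)   # +2 diopter
inches = d / 2.54
# d = 50 cm, about 19.7 inches, matching the example in the text
# and the +2 / Infinity row of Table 17.1.
```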
Figure 17.16. (top) Bokeh is the blur produced by out-of-focus point sources in the frame. It can be used to great aesthetic purpose, as in this shot from Kiss Kiss, Bang Bang. Different lenses have different bokeh characteristics.
Figure 17.17. (above) The Revolution snorkel lens system by Clairmont Camera. (Photo courtesy of Clairmont Camera)
This spec shows you the farthest distance at which you can work; put a
plus one-half on your normal camera lens, set it on infinity, the farthest,
and objects two meters away are in focus. Nothing farther could be shot
sharp. Put on a plus one and the max working distance is one meter. Put
on a plus two and the object has to be 500 millimeters, or half a meter, or
about 19 inches away (from the front of the diopter, not the film plane)
to achieve sharpness. All those examples are with the main lens (prime or
zoom) "set at infinity."
A split diopter is one of these magnifiers split in half (Figure 17.15). It
covers half your field, and the stuff seen through the glass is focused closer,
and the other half, which is missing (just air), will be focused where the
main lens is set. Put a plus one-half split diopter on your camera. Focus the
main lens at infinity. One-half of the field, through the diopter, is sharp
at 2 meters. The rest of the field is focused at infinity. If you set the lens at
15 feet, the clear half is focused at 15 feet and the diopter half will focus at
1 1/3 meters. There's a fuzzy line at the edge of the split diopter, and this
has to be hidden artfully in the composition:
• Use the lowest power diopter you can, combined with a longer
focal length lens, if necessary.
• Stop down as much as possible.
• There is no need for exposure compensation with diopters.
• When using two diopters together, add the diopter factors and
always place the highest power closest to the lens.
EXTENSION TUBES OR BELLOWS
The advantage of extension tubes or bellows is that they do not alter the
optics at all, so there is no degradation of the image. Extension tubes are
rings that hold the lens farther away from the film plane than it normally
sits, thus reducing the minimum focus distance.
A bellows unit is the same idea but is continuously variable. Either will
give good results down to about 1:2. Extension tubes are incompatible
with wide-angle or zoom lenses. Optically, the best results at very high
magnifications are obtained by reversing the lens (so that the back of the
lens faces the subject) and mounting on a bellows unit. The simple rule is,
to achieve 1:1 reproduction, the extension must equal the focal length of
the lens. For 1:1 with a 50mm lens, for example, you would need a 50mm
extension.
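The 1:1 rule is the special case of a more general relationship: magnification from extension is extension divided by focal length. A minimal sketch:

```python
# Magnification gained from lens extension: m = extension / focal length.
# The 1:1 rule in the text is the special case extension == focal length.
def extension_magnification(extension_mm, focal_mm):
    return extension_mm / focal_mm

m = extension_magnification(50, 50)     # 50mm extension on a 50mm lens
# m = 1.0, i.e. 1:1 reproduction, as the rule above states.
half = extension_magnification(25, 50)  # 25mm extension gives 1:2
```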
A variation of this is the swing-and-tilt mount (Figure 17.18), which gives
the lens mount the same kind of controls used in a view camera. The lens
can not only be extended for macro work, but the plane of focus can also
be tilted. This permits part of the image to be in focus and part of the
image on the same plane to be out of focus.
MACRO LENSES
Macro lenses are actually specially designed optics, optimized for close-up
work. They are good in the 1:2 to 1:1 range. Some macros have barrel
markings for magnification ratio as well as focus distance; this facilitates
calculating the exposure compensation.
SNORKELS AND INNOVISION
Several types of snorkel lenses are available that are like periscopes. They
generally allow for extremely close focus and for getting the lens into
incredibly small spaces (Figure 17.19). Some are immersible in water. Innovision
is a snorkel-type lens for extreme close-up work. It has the advantage
of an extremely narrow barrel that can reach inside very small areas,
even inside flowers.
Figure 17.18. (above, top) A full swing-and-tilt system. (Photo courtesy of Century Precision Optics)
Figure 17.19. (above, middle) A Cinewand snorkel lens being used for a macro shot on a food commercial. (Photo courtesy MakoFoto)
Figure 17.20. (above, bottom) Testing focus on a checkout day or setting back focus on the set requires a good focus target such as this one from DSC Labs.
SPECIALIZED LENSES
Specialized applications of the snorkel are the Revolution system and Frazier
lens (Figure 17.17). These have remarkable depth-of-field that seems to
defy physics and also allow for the lens itself to rotate, pan, and tilt during
a shot. It is possible to have objects that are actually touching the lens in
focus and still maintain usable depth in the distance.
Figure 17.21. (left) The Cinefade is a motorized adjustable ND filter for the lens; this can be used to change the depth-of-field in the middle of a shot. The small "binoculars" above the lens is a rangefinder, but is not in use. (Photo courtesy Cinefade)
18
set operations
MAKING IT HAPPEN
The working relationship between the director and cinematographer is
the key to getting a film made. Along with the production designer, they
are the people on the set responsible for creating the look and feel of the
project. Let's look at the responsibilities of everyone involved, first of all in
a typical feature film. These procedures are general to most types of production,
including commercials and music videos; on small productions
such as industrials and documentaries, some of them are omitted, but the
essentials are the same.
In relation to the camera work, the director has a number of duties. It
is the director who makes the decision as to what shots will be needed to
complete a particular scene. He or she must specify where the camera will
be placed and what the field of view needs to be. Some directors prefer to
specify a specific lens, but most just indicate to the DP how much they
want to see, and then the camera person calls for the lens (or focal length
on a zoom) to be used.
The director must also specify what camera movement, zooms, or other
effects will be needed. Most directors do all of this in consultation with
the DP and ask for ideas and input. Problems often arise when a new director
feels he must make every decision by him/herself. One thing is beyond
question: the lighting of the scene is the cinematographer's responsibility;
the director should indicate the look, tone, and feel they want for a scene
but should never call for a specific lighting plan or certain lights to be used.
One of the most common situations is when directors ask for long, complex
dolly or Steadicam moves. It can be very effective and dramatic to
shoot an entire scene in one shot, with the camera moving constantly with
the characters even as they go from room to room or make other types of
moves. However, these types of shots are generally difficult to set up, difficult
to light (since you are so often forced to hide the lights), and usually
very demanding for the focus puller. They also require many rehearsals
and many takes to get all the elements to work together: the timing of
actors' moves, the timing of camera moves, changes in focus, and in some
cases changes in T-stop. Lighting is much more complex because it is like
lighting for multiple cameras in very different positions: it is difficult to
make the lighting work well for every camera position and hide all the equipment.
When I meet a director, I try to first talk about emotions and what the story means for the director. What I want from a director is passion; I want to do projects that are important to the director. Because then it's personal and it matters. Every decision about a movie, about cinematography, about light, about camera placement, is emotionally important. And after all, what matters in life, I think, is what you feel. So movies are represented emotions. I try to put those emotions into images. For me, that's the main approach.
Rodrigo Prieto (Babel, The Wolf of Wall Street)
Long, complex shots are exciting to conceptualize and great fun when
they are completed successfully. Also, it sounds so quick and convenient
to just "go ahead and get the whole thing in one shot." The problem
is that almost inevitably, the shot gets cut up into pieces anyway, with
inserts, close-ups, or other coverage. This means that the time and effort spent
to accomplish it were largely wasted.
Unless you absolutely know that the continuous take will be used, it
is often better to break it up into logical pieces. At the very least, get a
couple of cutaways in case the long take is boring, technically flawed, or
otherwise unusable. This is not in any way to say that you shouldn't try for
long, continuous takes—just that you need to be aware of the dangers and
difficulties. Certainly, films like Birdman use them to extraordinary effect.
Ideally, the director should arrive on the set with a complete shot list.
This is a list of every shot and every piece of coverage needed for the scenes
on that day's shooting. Some directors are extremely well prepared with
this, and others let it slide after the first few days, which is a mistake. It is
true that shot lists are often deviated from, but they still provide a starting
point so that everyone in all departments is headed in the same direction.
THE DIRECTOR OF PHOTOGRAPHY
Every director has a different style of working: some will be very specific
about a certain look they want and exact framing, while others want to
focus on working closely with the actors and staging the scenes, leaving
it to the DP to decide on exact framing, camera moves, lighting
style, filtration, and so on.
Figure 18.1. A typical working set. In the foreground is an electrician (lighting technician) with various colors of electrical tape for marking distribution cables. (Photo courtesy Owen Stephens at Fill-Light)
Ultimately the director is the boss; he or she may work in whatever fashion
they wish. A professional DP should have the flexibility to work with
a director in whatever manner they choose. It is up to the DP
to deliver for the director the kind of look and visual texture they are
looking for and to ensure that the director and editor have all the footage
they need and that it is all editorially usable. The DP's responsibilities are
numerous. They include:
• The look of the scenes, in consultation with the director.
• Directing the lighting of the project.
• Communicating to the gaffer and key grip how the scene is to be
lit: specific lights to be used, gels, cuts with flags, silks, overheads,
diffusion, and so on. Directing and supervising the lighting process.
• Coordinating with the production designer, wardrobe, makeup,
and effects crew concerning the overall look of the film.
• Filtration on the camera.
• Lenses: including whether to use a zoom or a prime lens (though
this may sometimes be the director's call).
• Ensuring that there are no flicker problems (see the chapter Lighting
Sources).
• Being constantly aware of and consulting on issues of continuity:
crossing the line, screen direction, and so on.
• Being a backstop on ensuring that the director hasn't forgotten
specific shots needed for good coverage of the scene.
Figure 18.2. Cinematographer Art Adams takes an incident reading for a night exterior scene. Deciding what f/stop to set the lens at is one of the DP's most important responsibilities. (Photo courtesy Adam Wilt)
• Supervising their team: camera operator, the camera assistants,
the electricians, the grips, and any second camera or second unit
camera crews; also the data wrangler and DIT.
• Watching out for mistakes in physical continuity: clothing, props,
scenery, and so on. This is primarily the job of continuity and the
department heads, but the eye looking through the lens is often
the best way to spot problems.
• Specifying the specific motion picture film raw stock(s) or type of
video camera to be used and any special processing or the workflow
for video footage.
• Determining the exposure and informing the First AC what
T-stop to set on the lens.
• Ensuring that all technical requirements are in order: correct film
speed, shutter angle, and so on.
Typically, when starting to light and set up a new scene, the assistant director
will ask for an estimate of how long it will take to be ready to shoot.
This is not an idle question, and it is very important to give an accurate
estimate. The AD is not just asking this to determine if the company is on
schedule: there is another important consideration. She has to know when
to start putting the actors through the works. This means sending them
through makeup and wardrobe; this may also be referred to as when to
"put them in the chairs."
Many actors do not appreciate being called to the set a long time before
the crew is ready to shoot, and in addition, if they have to wait, their
makeup might need to be redone, and so on. It may also affect the timing
of rigging special effects.
I think people just see cinematography as being about photography and innovative shots and beautiful lighting. We all want our movies to look great visually, to be beguiling and enticing, but I think that what really defines a great cinematographer is one who loves story.
Seamus McGarvey (The Avengers, Atonement, Anna Karenina)
THE CINEMATOGRAPHER’S TOOLS
Light meters were discussed in Exposure. Generally, DPs carry two meters
with them: the incident meter and the spot meter (reflectance). The Sekonic
meter (Figure 18.2) combines both functions, but there are many types
of light meters in use on sets. A color meter is also useful but usually not
carried on the belt (or around the neck or in a pocket) all the time. Adam
Wilt's Cine Meter II (Figures 8.37 and 8.38 in Exposure) actually performs
all three functions with the addition of the Luxi dome for incident readings.
GAFFER GLASS
A viewing glass, called a gaffer's glass (Figure 18.3), is an important tool.
It allows you to look directly into a light without being blinded. This is
important when focusing a light: a standard procedure when setting lights
is to stand (or sit) where the light needs to be aimed, then look into the light
with the viewing glass. This allows you to see precisely where it is aimed
so you can direct the electrician in trimming it up/down, left/right, and
spot or flood. Focusing the lights might be done by either the DP or the
gaffer. The viewing glass can also be used to look at the sun to see if clouds
are coming in. This can also be done by looking at the reflection in a pair
of sunglasses.
LASER POINTER
Once you have stood on the studio floor pointing and waving your arms, trying
to communicate to someone on a ladder or on a catwalk exactly where
you want something rigged, you will appreciate the ease and precision of
a laser pointer.
Figure 18.3. A gaffer's glass (neutral density viewing filter) with a China ball reflected in it. (Photo courtesy E. Gustavo Petersen)
DIRECTOR'S VIEWFINDER
A director’s viewfnder allows you to see how a certain lens will portray the
scene without looking through the camera. There are two types of direc-
tor’s fnders. The frst is a self-contained optical unit that can be “zoomed”
to diferent focal lengths to approximate what a lens will see. Far more
precise is a viewfnder/camera mount that allows you to mount the actual
lenses you will be using; however, the camera lenses won’t be available on
a location scout or tech scout.
DIGITAL STILL CAMERA
A digital still camera (most of them also do video) is useful for location
scouts and also on the set. DPs frequently shoot a still then manipulate it
in software to get a “look” that they show the director so they can make
decisions about the scene. This can be used to communicate to the dailies
colorist exactly what you’re going for with the look.
THE SHOT LIST
The director's shot list serves a number of functions. It lets the DP and
the assistant director better plan the day, including possibly sending off
some electricians and grips to pre-rig another location. It also helps the DP
in determining what additional equipment should be prepped, and how
much time is reasonably allowable to light and set the shot within the
constraints of what needs to be done that day. Even if the shot list doesn't
get followed step by step, it will often at the very least provide a clue as
to what style of shooting the director wants to employ: is it a few simple
shots for each scene, detailed and elaborate coverage, or perhaps a few
"bravura" shots that emphasize style and movement?
In addition, it is very helpful in serving as a reminder for the director,
the DP, the assistant director, and the continuity person so that no shots or
special coverage are missed. One of the gravest production errors a director
can make is to wrap a set or location without getting everything needed.
Reshoots are expensive, and there is always the possibility that the location
or the actors will not be available to correct this mistake. Although
all these people assist in this, it is the director's fundamental responsibility
to "get the shots." This is far more important than being stylish, doing
fancy moves, and so on. None of these matter if scenes are not completed
and usable. In video, the absolute most basic rule is to never leave a location
until you have checked the footage for problems, performance, and
continuity.
As important as it is to learn the techniques of cinematography, you also have to learn how to deal with the movie set, with show business. I came up with a cinematographer who is very talented, but she was never quite able to handle everything else you have to do—dealing with the producer and the crew and the time frame that you have to follow.
Maryse Alberti (Hillbilly Elegy, The Wrestler, Get Over It)
Even if not in the shot list, some directors will charge the script supervisor
with keeping a list of "must haves." This is especially useful for cutaways
or inserts that might easily be forgotten. It is also helpful for "owed"
shots. "We owe a POV shot from the window," is a way of saying that
there is a shot that is part of this scene that we are not shooting now, but
we have to pick it up while at this location.
Prep is an important part of every production, no matter how large or
small. On small to medium feature films, a very general rule of thumb is
at least one day of prep for every week of planned filming, but on larger
productions, and especially ones that are heavy on stunts, effects, and VFX,
the DP's prep may extend to many months.
During this time, the DP will be consulting with the director on the
look, the method of shooting, how many cameras are to be used, and the
general "feel" of the film that they are going for. Of course, location scouting
is a huge part of prep—both selecting locations and then revisiting them
to think about what needs to be done to most effectively work at that location.
Prep includes:
• Reading the script.
• Talking to the director.
• Sharing images, screening films.
• Talking to the production designer.
• Location scouts and tech scouts.
• Meeting with the Gaffer, Key Grip, and First AC.
Figure 18.4. (top) Attaching a filter to a lens when there is no matte box or the filter won't fit the matte box can be accomplished by making a daisy out of camera tape (1-inch cloth tape).
Figure 18.5. (bottom) The filter attached to the tape. This is a last resort for when you don't have a matte box that will hold the filter. Be very careful—filters are expensive!
PUTTING THE ORDER TOGETHER
• Camera.
• Lighting.
• Electrical (up to the gaffer and best boy).
• Gels/Diffusions (DP selects favorites, gaffer adds a "standard
package"—CTO, CTB, common diffusions, etc.).
• Practical bulbs.
• Special gags (lighting rigs).
• Grip equipment (mostly up to the key grip).
• Special gear requested by the DP.
READING THE SCRIPT
As you read the script, there are a number of things you are doing. First
of all, you need to really understand the story. Not the plot, but the
real inner meaning of the story—this will be an important clue to how
it wants to be portrayed visually. Second, you'll need to understand the
logistics of the project: is it mostly studio or locations? Are the locations
pretty standard, or are they far away, difficult, or involve a lot of travel and
transportation? Weather conditions? Underwater or near water? Aerial
shots? Do they need to be fixed-wing aircraft, helicopter, or drone shots?
Driving shots? Are there lots of stunts? Extensive VFX (visual effects)
work? All of these will help you get started thinking about what equipment
will be needed, what cameras or film stock is needed, how much
crew is to be booked, how much second unit, and so on. Some of these
issues will not be decided yet, but you need to start forming ideas and
giving input on what may be the best way to approach them from the
standpoint of cinematography and logistics.
It would be wrong to take a painter and ask him to paint in a certain style from the past. It would be wrong to ask a cinematographer to photograph a film in the same style as another picture because you can never do that. The same elements, the same history, does not exist in the same way as it did previously. But you can reference past work in order to be more clear with yourself about where you want to go and what you want to do.
Vittorio Storaro (The Conformist, Apocalypse Now, The Last Emperor)
TALKING TO THE DIRECTOR
Another important part of prep is “getting on the same page” with the
director. It will usually involve lots of discussions about the script, the
overall plan and, of course, the "look." Most directors will have some visual ideas about how they want the film to look; some will be vague and unformed, some will be very specific. An excellent way of talking about it without the need to get specific is to discuss analogies and metaphors. Probably the most commonly used technique is to watch films and talk about them. Watching entire films or clips can be key to this process.
Hopefully, the director won’t show you a flm and say “I want it to look Figure 18.6. A typical set in operation:
exactly like this.” Similar to this is to look at photographs. Some directors dolly and camera are at lower left center,
video village is under the blue tent (note
and DPs keep look books which consist of pictures torn from magazines, or the black backing to prevent glare on
images downloaded from the net. the monitors), DP and director’s moni-
tor is at the center with 4x4 foppies
Referring to great paintings or painters is also extremely useful. The side and back At the left, a 12x16 solid
production designer will sometimes be part of these conferences as well. provides negative fll for the set (Photo
courtesy Heavy Horse Productions)
This is particularly important if sets are being built. The DP will need to
explain the lighting and grip needs for the set. An excellent example of
this is a space movie. Sets for spacecraft are almost always very tight and
cramped and this is going to make traditional lighting techniques difficult
or impossible. Often the lighting opportunities and sometimes the actual
lighting need to be built in as part of the set: glowing panels, monitors,
wall lights, overheads; all of these may be actual lighting sources or just
serve as motivation for added units that are hidden around the set.
LOCATION SCOUTS AND TECH SCOUTS
Hopefully, the DP will have some input on selecting locations, but that doesn't always happen—sometimes they have already been chosen. However, it is up to the DP to say so when they spot potential problems with a location, especially ones which mean that the location may not yield the kind of look the director is going for, either for weather, time-of-day, or logistical reasons. A location scouting form is shown in Figure 18.8.

On the actual location scout, it is important to think beyond just "how it will look." It is also critical to think in terms of transportation, climate factors, and even things like loading and accessing equipment—these factors are the primary responsibility of the gaffer, key grip, sound mixer, transpo coordinator, and, of course, the location manager, all of whom must be on the scout. If at all possible, try to scout the location at roughly the same time of day and conditions you will be shooting—weekday vs. weekend, for example. Once locations have been chosen, there will be tech scouts.

"The advice I got the first day I worked in the film business: always be five minutes early to work, never five minutes late. But more importantly, live on the edge when it comes to your photography—take risks. Put your ideas on film and fall down a few times; it will make you a great filmmaker."
—Salvatore Totino (Everest, The Da Vinci Code)
COORDINATING WITH OTHER DEPARTMENTS
Besides their own crew, the DP must also coordinate with other crew
members. The first is the production designer. If sets are being built or
extensively dressed, it is essential that the DP look at them while they are
still in plans or sketches, not after they are built. The DP will discuss with
the designer what lighting opportunities will be part of the set. These
include windows, skylights, doors, and other types of places to hide lights
or bring light into the set.
Figure 18.7. What appears as a simple shot on the screen may be quite complex when you look behind the scenes. The opposite can also be true. (Photo courtesy Oliver Stapleton, BSC)

Also to be discussed are the practicals—that is, working lights that are part of the set, and whether they are hanging lamps, wall sconces, or table lamps. A good set dresser will usually have a couple of spare table lamps or hanging lights on the truck. These can be invaluable either as a lighting source themselves or as a "motivator" of light. It may be up to the electricians to wire them up, however.
The situation may call for “wild walls,” which are walls of the set that
can be removed to make room for the camera, dolly track, and other
equipment. Finally, it is important to consider not only the set, but how
it will be positioned on the stage. There might be a window or glass wall
that would be a great lighting opportunity, but if it is only a few feet away
from the wall of the stage, it may be difficult or impossible to use it. On
the set, the DP is in constant communication with the assistant director
concerning the schedule: how much time is left before the actors go into
overtime, and so on.
Before the shooting begins the AD makes the schedule indicating what
scenes will be shot on what days and a one liner, which is a one line descrip-
tion of each scene. The schedule also indicates whether scenes are day or
night, interior or exterior, whether the day scenes are to be shot during
the day or night and vice versa. This is essential for the DP in planning
what equipment and supplies will be needed.
The call sheet for the day lists what scenes will be shot, what actors are involved, if there are stunts, etc. At the beginning of each day, a production assistant or second AD will hand out sides. These are copies of the script pages to be shot that day that have been reduced to a quarter of a page, so that they can be easily slipped into a pocket. The sides are the "bible" for the day (along with the call sheet). Of all the principles of filmmaking perhaps the most important of all is that everyone must know what is going on and is working, as they say, "on the same page." Communication is the key—nothing can derail a production quicker than poor communication. This is why everyone should have a copy of the script pages for the day.

During shooting, the DP is under a great deal of pressure and is thinking about a dozen things at the same time; complete focus and concentration are essential. One of the unwritten rules of filmmaking is that only certain people talk to the DP: the director, of course, the First AD, the First AC, the gaffer, and the grip. A working set is no place for idle chitchat, or as a great AD used to say, "Tell your stories walking."

"I think that a lot of the best films are collaborations by people with a shared vision. Someone has an idea which is turned into a script. A director and cameraman get together and interpret that script. Production designers, location managers, costume designers, makeup artists and other people play roles. We all need each other. Everyone contributes."
—Phil Meheux (Casino Royale, Edge of Darkness)
Figure 18.8. This location scouting form shows some of the issues to think about when you are visiting locations. The initial search is done by a location scout or production person, but when it's down to final choices, the director, AD, DP, gaffer, key grip, and sound recordist must do the final survey. (The form's fields include production title, date of scout, scene numbers, location name and address, contact name and phone, key held by, availability, restrictions, bathrooms, and potential problems.)

• Put everything back in cases, arrange to your liking.
• Check that you have all expendables.
• Go through your personal kit, make sure you have all tools and supplies (often the rental house has them for sale).
• Second AC may take home batteries to charge overnight.

Some other tasks that will be part of a proper camera checkout day:
• Learn how new rigs work and pre-configure them.
• Organize equipment within cases in a way you prefer.
• Troubleshoot and fix hardware problems with rented gear. The rental house staff will be glad to assist you and repair or replace anything that isn't working properly.
There will be very few opportunities to get together as a camera depart-
ment on such a relaxed schedule. When you’re on set, there is little or no
time to work these things out. Being prepared is a big responsibility for all
crew members!
THE TEAM
The DP has three groups of technicians who are directly responsible to them:
the camera crew, the electricians, and the grips. The DP also coordinates
with the art department and, of course, the AD.
LIGHTING TECHNICIANS (ELECTRICIANS OR SPARKS)
Figure 18.19. A digital camera report. (Courtesy FotoKem)

The lighting crew carries out the DP's instructions on lighting. The crew consists of the gaffer (the head of the crew), the second electric, and electricians (sparks in the UK). US unions now use the terms Chief Lighting Technician, Assistant Chief Lighting Technician, and Lighting Technicians. The second electric used to be called the best boy—no longer preferable for obvious reasons. In addition to the full-time set crew, there may be day players—additional crew on an as-needed basis. For preparing sets and locations before the set crew arrives, a rigging gaffer and electricians may also be booked. We'll talk about second unit and splinter units a bit later—these might be very small or sometimes nearly as large as the main crew, especially for very large stunts or special effects shots. For more on the lighting crew, see Motion Picture and Video Lighting, by the same author.
GRIPS
The grip crew is headed by the key grip. His assistant is the best boy grip or
second grip. Then there is the third grip and whatever additional grips are
needed. Grips are sometimes referred to as hammers as one of their duties
is to build platforms, rigs, and bracing that are not part of the sets. The
key grip may push the dolly, or there may also be a dolly grip whose sole
responsibility is to push the dolly. In the US, the grip crew has a very wide
range of duties:
• The grips handle all C-stands, high rollers, and so on, and whatever goes on them: nets, flags, frames, etc. This includes any form of lighting control or shadow making that is not attached to the light itself—nets, flags, and silks.
• They also handle all types of mounting hardware, specialized
clamps of all types that might be used to attach lights, or almost
anything else anywhere the DP or gafer needs them.
• They handle all bagging (securing lights and other equipment with sandbags). They may also have to tie-off a stand or secure it in another way in windy or unstable conditions.
Figure 18.20. A digital camera report form.

• They deal with all issues of leveling, whether it be lights, ladders, or the camera. Their tools for this are apple boxes, cribbing, step blocks, and wedges.
• They handle all dollies, lay all dolly track, and level it. Also, any
cranes are theirs to set up and operate. This is especially critical
when a crane is the type that the DP and First AC ride on. Once
they are seated, the crane grip then balances the crane by adding
weights in the back so that the whole rig can be easily moved in
any direction. Once this has been done, it is absolutely critical that no one step off the crane until the grips readjust the balance.
• The grips are also in charge of rigging the camera if it’s in an
unusual spot, such as attached to the front of a roller coaster, up
in a tree, and so on.
• The grips build any scafolding, platforms, and camera rigs. They
may assist the stunt men in building platforms for the airbags or
mounts for stunt equipment.
• The grip crew and the key grip, in particular, are in charge of
safety on the set, outside of anything electrical, which is, of
course, handled by the electricians, and stunts, which are directed
by the stunt coordinator.
This is the prevalent system in the United States; in other countries (and
other areas where the so-called “English system” is used) it is handled dif-
ferently—the electricians (sparks) handle all lighting-related issues such as
nets and fags, and so on, and the grips are primarily responsible for dollies
and cranes.
OTHER UNITS
Three other types of teams will also be under the DP’s control: second unit,
additional cameras, and rigging crews. Second unit is an important function.
Usually, it is used for shots that do not include the principal actors. Typi-
cal second unit work includes establishing shots, crowd shots, stunts or
special effects, and insert shots.
Figure 18.21. The tool belt is where the AC carries many of the tools and supplies needed on the set. This would be the rig for a Second AC as it has various colors of camera tape (1" paper tape) for making marks for the actors. (Photo courtesy Cris Knight)

ADDITIONAL CAMERAS
In the case where additional cameras are used in principal photography, they are designated B camera, C camera, and so on. On a big stunt that cannot be repeated, such as blowing up a building, it is not uncommon for a dozen or more cameras to be used. Some are used as a backup to the main camera in case of malfunction, some are just to get different angles, some run at different speeds, and so on. Particularly on crashes and explosions, some may also be crash cams and will either be in a reinforced housing or be "expendable." GoPros are a widely used option for this application.
SECOND UNIT
Second unit may be supervised by a second unit director, but often the team
consists only of a second unit DP, one or two camera assistants, and pos-
sibly a grip. It is the duty of the second unit DP to deliver the shots the
director asks for in accordance with the instructions and guidance of the
director of photography. It will often be up to the DP to specify what
lenses and settings are used. It is the DP’s name that is listed in the credits
as the person responsible for the photography of the flm; the audience
and the executives will not know that a bad shot or a mistake is due to the
second unit. A variation on second unit is called splinter unit for specific kinds of shots that aren't stunts or normally second unit. VFX units may also handle shots intended for visual effects, such as background plates for greenscreen/bluescreen.
STUNT CAMERAS
Some cameras are “crash cams,” small expendable cameras that can be
placed in dangerous positions. On digital shoots, DSLRs and GoPros are
often used as crash cams due to their low cost. Is it a problem that a DSLR
isn’t going to have the same visual quality as a high-end camera? Not usu-
ally, primarily because the kinds of shots a crash cam is intended to get
generally play on screen only briefly—often just a few frames. This also means that a number of different cameras with different inherent "looks" will need to be edited together—this is one of the primary reasons for the ACES system we talked about in Color.
Figure 18.22. The camera cart should be as close to the set as possible. (Photo courtesy Cris Knight)

LOCKED OFF CAMERAS
In some cases, cameras are locked off, often because it is simply too dangerous to have an operator in that position. In these cases the camera is mounted securely, the frame is set, and the focus and T-stop are set. They are then either operated by wireless control, remote switch, or an AC turns on the camera and moves to a safe position. Crash boxes may be used if there is danger to the camera. Polycarbonate (bulletproof "glass") or pieces of plywood with a small hole for the lens might also be used. Of course, small lock-off cameras such as GoPros are now everywhere on some shots.
RIGGING CREWS
Many sets need preparation ahead of time. Rigging crews perform tasks
such as running large amounts of cable for electrical distribution, setting
up scafolding, rigging large lights, and other jobs that it wouldn’t make
sense for the shoot crew to do while the rest of the crew (makeup, hair, set dressers, etc.) waits. These crews are run by a rigging gaffer and rigging key
grip, and the size of the crew varies widely depending on the needs of the
production.
SET PROCEDURES
The way a movie is shot is the result of decades of trial-and-error that
arrived at some universal principles that have become the standard proce-
dures for shooting a scene. These methods are surprisingly similar around
the world with some exceptions.
BLOCK, LIGHT, REHEARSE, SHOOT
The most fundamental rule of filmmaking on the set is Block, Light, Rehearse, Shoot. It's simple. It's the smart and efficient way to get through a scene.
BLOCK
First the director blocks the shot—not only showing the actors their move-
ments, but also indicating the camera positions or moves they want. The
Second AC will also set marks for the actors at this stage.
LIGHT
Once that is done, the AD announces “DP has the set” and both actors and
director step aside while the cinematographer works with the gafer and
key grip to light the shot with lighting stand-ins.
REHEARSE
When the DP is ready, the AD calls for "first team" and the director and
actors do the serious rehearsals for the scene.
SHOOT
Once this is done, it’s time to actually shoot the scene. At this point, the
director is in charge and the entire crew stands by for any last minute
changes or alterations in the plan.
THE PROCESS
Generally the lighting for the scene will be roughed in before this process
commences based on a general understanding of the scene as described by
the director. This may range from everything fully rigged and hot for a
night exterior to only having the power run for a small interior. One of
the most important things the DP needs to know at that point is what will
not be in the shot—where it is safe to place lights, cables, etc. After blocking, when the DP is ready to begin lighting seriously, the steps of production are reasonably formalized. They are as follows:
• The director describes to the DP and AD what the shot is. At this
stage, it is important to have a rough idea of all of the coverage
needed for the scene, so there are no surprises later.
• The director blocks the scene and there is a blocking rehearsal.
• Marks are set for the actors. The First AC might choose to take
some focus measurements at this time, if possible.
• If needed, focus measurements are taken by the First AC, assisted
by the Second AC. Preferably with stand-ins.
• The AD asks the DP for a time estimate on the next setup.
• The AD announces that the “DP has the set.”
• The DP huddles with the gafer and key grip and tells them what
is needed.
• The electrics and grips carry out the DP’s orders. Lighting stand-
ins are essential at this point. You can’t light air!
• The DP supervises the placement of the camera.
• When all is set, the DP informs the AD “camera is ready.”
• The AD calls first team in and actors are brought to the set.
• The director takes over and stages the final rehearsal with the actors
in makeup and wardrobe.
• If necessary, the DP may have to make a few minor adjustments
(especially if the blocking or actions have changed), called tweak-
ing. Of course, the less, the better, but ultimately it is the DP’s
responsibility to get it right, even if there is some grumbling from
the AD.
• The DP meters the scene and determines the lens aperture. When
ready, he informs the AD.
• The director may have a fnal word for the actors or camera opera-
tor, then announce that he or she is ready for a take.
• The AD calls for last looks and the makeup, hair, and wardrobe
people to make sure the actors are ready in every detail.
• If there are smoke, rain, fre, or efects, they are set in motion.
When everything is set, the AD calls "lock it up" and this is repeated by production assistants to make sure that everyone around knows to stop working and be quiet for a take.
Figure 18.23. The Second AC pulls a measuring tape to set the focus distance. (Photo courtesy Vertical Church Films)

• The AD calls roll sound.
• The sound recordist calls speed (after allowing for pre-roll).
• The AD calls roll camera.
• The First AC switches on and calls "camera speed." (If there are multiple cameras it is "A speed," "B speed," etc.)
• The First AC or operator says "mark it," and the Second AC slates, calling out the scene and take number.
• When the camera is framed up and in focus, the operator calls out
“set.”
• When applicable the AD may call “background action,” meaning
the extras and atmosphere begin their activity.
• The director calls “action” and the scene begins.
• When the scene is over, the director calls “cut.”
• If there are any, the operator mentions problems she saw in the
shot and any reasons why another take may be necessary or any
adjustments that may make the physical action of the shot work
more smoothly.
• If there is a problem that makes the take unusable, the operator
may say something or in some cases stop the shot (it’s up to the
director’s preferences). The operator may call out “boom” if the
mic is in the shot. Do it between dialog.
• If the director wants another take, the AD tells the actors and the
operator tells the dolly grip “back to one,” meaning everyone resets
to position one.
• If there is a need for adjustments, they are made and the process
starts over.
Most directors prefer that they be the only ones to call cut. Some ask that if something is terribly wrong and the shot is becoming a waste of film, the DP or the operator may call cut and switch off the camera. The
operator must be sure they know the director’s preference on this point.
The operator will call out problems as they occur (best to do it in between
dialog so as not to ruin the sound track), such as a microphone boom in the
shot, a bump in the dolly track, and so on. It is then up to the director to
cut or to continue. If you as the operator are in a mode where you call out
problems as they occur, you certainly want to do it between dialog, so that
if the director does decide to live with it, the dialog editor can edit around
your comments. Most directors prefer that no one talk to the principal
actors except themselves. For purely technical adjustments about marks
or timing, most directors don’t mind if you just quickly mention it to the
actor directly—but only if it will not disturb their concentration.
ROOM TONE
Easily the most annoying thirty seconds in filmmaking, rolling audio room tone is an absolute necessity. It is used to fill in blanks or bad spots in the audio track. Room tone must be recorded in the same environment as the original audio. It is very easy to forget to record room tone. If the audio mixer is recording it, the camera crew only needs to be quiet while it's rolling; if audio is being recorded onto the camera sound tracks, then it should be slated.

You can either write "tone" in the take number slot or write it on the back of the slate as "room tone—Int. Dave's Living Room." Writing it on the front of the slate is probably better as it shows the scene number. This is important as "Dave's Living Room" might be shot for different scenes, some day, some night, some with traffic outside, some (hopefully) without traffic noise. The audio editor needs to know this information in order to relate this particular room tone with the right production audio. If room tone is being recorded to camera, it's useful to frame the mics in the shot after slating as a quick visual reference for the editor.
SET ETIQUETTE
You’ll fnd that a great deal of what we call “set etiquette” is really just
common sense, good manners, and understanding how the professional
working environment operates:
• Show respect for your fellow crew members and their jobs at all
times. Be respectful of actors and background people.
• Absolute silence during a take!
• Make sure your cell phone is off—duh!
• Do not give the director or DP “advice” on how to shoot the
scene. The director will not be impressed if you speculate on how
Hitchcock would have done it.
• Don’t talk to the director or DP unless they speak to you.
• Don’t ofer your opinion on anything about the scene, unless it is
something you are responsible for.
• Nobody talks to the DP except the director, operator, First AC,
and DIT (and they know how to pick the right time). If a crew
knows each other well and it’s a lighter moment on the set, the
occasional joke is not out of place.
• Nobody uses a light meter on set except the DP and gafer.
• Don’t crowd around the camera.
• Don’t crowd around the monitor. The only people who should
be looking at the main monitor are the director, the DP, and
maybe the producer or an actor.
• Don’t try to do somebody else’s job. There is a very real reason for
this. See the commentary below.
• Don’t get in the way of other people doing their job.
• Do not touch the camera unless you are the DP, the operator, the First or Second AC, the loader, and sometimes the DIT. Never—like seriously, never.
• Don’t pick up or touch the recording media.
• Do not ask to look through the camera. On some sets, even the director will ask if it's OK to look. A big deal? So are eye infections.
Figure 18.24. Second Electric (Assistant Chief Lighting Technician) Josh Day plugs in three-channel flicker boxes for a lighting effect. The Second Electric (which used to be called the Best Boy) handles all electrical distribution on the set.

• Do not touch anything on a hot set—one that has been dressed and prepped and is ready for shooting, or one where shooting has taken place and the crew is going to be coming back to that set.
• Don't leave cups, water bottles, or food laying around. Never put them down on a working or "hot" set.
• Put your initials on the cap of your water bottle.
• Call "flashing" when shooting a photo. This is so the electricians don't think that your camera flash means that a bulb burned out in one of their lights.
• When a stunt is being run, never applaud until the stunt per-
former or stunt coordinator signals that the performer is safe and
not injured—usually a thumbs up. If you applaud and then it
turns out the stunt performer has been injured, you’re going to
look (and feel) like a real jerk.
• Don’t be in an actor’s sight line, especially Christian Bale.
• Don’t walk around or be distracting during a take.
• Some actors prefer that you not make eye contact with them at
all—respect the fact that they are extremely focused and concen-
trating on their job.
• Never try to chat with an actor uninvited.
• Never any off-color jokes or sexist remarks or anything crude; it's a professional environment—act accordingly.
• If you’re shooting on the street, several dozen people a day will
ask you “What are you shooting?” Standard response is “It’s a
mayonnaise commercial.” Nobody’s interested in mayonnaise.
Why is it such a big deal to not do somebody else’s job? You may have
heard stories about “There was a rope hanging down into the shot and
everybody had to wait for a union guy to come fix it." It is not, as people sometimes like to think, "a union thing." Sure, that does apply but it's not the real reason. Who does what is thoroughly defined by long-standing
practice for some very good reasons.
First of these is safety. Let's take the rope example. Let's say it's the grip
department that is responsible for that rope in this case. Suppose a camera
assistant decides he’s going to be a hero and yank on the rope to get it out
of the way. What if it’s tangled around some heavy object which then
comes down on the main actor’s head? Bad news.
In this case, the grips would have known what it was, would have known
how to deal with it and also would have had the training to know that you
ask the actors and crew to move off the set before dealing with anything
over their heads. More importantly, the insurance company is not going
to be very interested in your colorful and amusing story about how the
wrong crew member pulled the rope!
Second, is the “we had to wait” part. Except in unusual circumstances,
there will be a grip right near the set who can deal with it. If there is a
delay, it’s usually going to be for a good reason, such as someone is bring-
ing a ladder so it can be done safely.
Figure 18.25. At the end of checkout day, the cameras are in their cases and labeled with colored camera tape to indicate cameras "A," "B," and "C." (Courtesy Sean Sweeney)

Here's another example: some C-stands are in the shot. The inexperienced, first-time director barks at a PA to "move those things." The PA panics and moves them. It is unlikely that there won't be a grip nearby to handle it, but let's assume that there isn't. Those C-stands were put there by the grips for a reason. Now, the grip comes back to the set and needs a C-stand quickly—where are they? He can't find them and then he looks bad and, worse, it causes a delay in getting the shot! To sum it up—every type of task is assigned to one crew and one crew only—because it creates an efficient and safe work environment. It's just good common sense.
Art Adams puts it this way: "Be aware of your surroundings and never get in the way. One way to spot a crew person in the real world, for example, is that they never stand in doorways, because that's one great way to prevent work from being done. Most crew, if they have nothing going on at the moment, will stand near the set, somewhere behind the camera, out of the way but close enough to hear what's going on. They will always face the set, even if they are having a quiet conversation about sports or whatever, and they will break that conversation off as soon as they see they are needed. This is not considered rude at all. As a rookie crew person, you should always have something to do."

"Listen to your gut instinct and believe in it. And remember that the craft-service person on this job might be the producer on the next."
—Roberto Schaefer, ASC, AIC (Monster's Ball, Stranger Than Fiction)
SET SAFETY
Safety on the set is no joke. Movie sets are, quite frankly, potentially dan-
gerous places. People do get killed and seriously injured. Some general
safety rules:
• Don’t run on the set—ever.
• Never stand on the top of a ladder or even the second rung down.
Seriously, never. Never walk under a ladder.
• Production should always make sure fire extinguishers and first
aid kits are available and easy to find.
• Be sure the name, phone # and location of the nearest hospital are
on the call sheet.
• Don’t operate any piece of equipment unless you are the person
who should be doing it.
• Wear your gloves.
• Don’t operate any piece of equipment you are not completely
familiar with.
• Wear safe shoes—no sandals or open toes.
• Don’t do anything with electricity unless you are one of the elec-
tricians.
• The key grip is also the chief safety ofcer of the set.
• Stunt coordinator has responsibility for safety on stunts.
• Never leave anything on a ladder.
408 cinematography: theory and practice
LIGHTING, ELECTRICAL, AND GRIP
Figure 18.26. A student crew at work on the set (Photo Courtesy New York Film Academy)
• Nobody touches anything electrical except electricians!
• Excuse talent from the set before working overhead. Don’t let
anyone stand under you when on a ladder or in the grid.
• Sandbags on all stands, both lighting and grip.
• Double, triple, or more bags on anything big or likely to catch
the wind.
• Safety tie-offs on any stand or equipment that goes up very high,
especially if it's windy.
• Safety cables on all lights and anything overhead.
• Call out “going hot” before energizing any electrical run.
• Point a light away or block it from others before striking.
• Say “striking” before switching on any light.
• Know your knots!
• All electrical equipment should be properly grounded and
checked by second electric.
• Wear gloves when operating lights, especially if they have been
on recently.
• Communicate warnings when carrying anything heavy. Loudly
say “points!” or “free dental work!”
• Wear eye protection at all times when called for.
CRANE SAFETY
• Only qualified, experienced grips operate a crane, especially one
that carries people!
• Never get on or off a crane until told it's safe by the crane grip.
Never.
• Wear your safety belt at all times on a crane.
• Never operate a crane anywhere near high-voltage electrical
lines—this is a frequent cause of death on movie sets.
SLATING TECHNIQUE
Figure 18.27. Keeping track of all the cables and making sure they are correctly plugged in is a big part of the camera assistant's job.
At the time the AD calls “roll camera” and the AC announces “speed”
(meaning that the camera is up to sync speed), the Second holds the slate
where it is in focus and readable and announces the scene and take number
so that it will be recorded on the audio track (more on this later). She then
says “marker” (so that the editor knows that the next sound they will hear
is the clapper) and slaps the clapper stick down sharply so that the clap is
clearly audible on the audio. She then quickly gets out of the way. In case
adjustments are necessary, such as changing focus from the slate to the
scene, the operator will call out “set” and the director can then call for
“action.” Proper slating technique is essential; bad slating can cause the
editor real problems, frequently problems that end up costing the producer
more money—and you can guess how they feel about that.
Older slates could be written on with chalk. Most slates today are white
plastic, and an erasable marker is used (Figure 18.29), or they are digital
and the timecode and other information are displayed electronically
(Figure 18.31). For erasable slates, the ACs frequently tape a makeup
powder puff to the end of the erasable marker; that way they will be
conveniently together in one piece for both marking and erasing. This is
referred to as the mouse. You can also buy erasers that fit on the end of a
slate marker, which are much more compact and efficient.
VERBAL SLATING
When sound is being recorded you also say the important information out
loud as you slate—this is called verbal slating. An example would be “Scene
27, Take 3, marker,” and then you clap the slate. If it's a new roll of film
or new media in a digital camera, you should also say the roll number.
After the initial slate, you don't have to repeat it. If there are multiple
cameras, add “A slate,” “B slate” as appropriate. Frequently, the sound mixer
records the verbal information, instead of the camera assistant.
Figure 18.28. An excellent example of why everyone needs to know who is responsible for what—the Digital Loader slides a media card into the camera. Not all productions can afford a Loader, but on larger jobs they are essential—not only to avoid over-taxing the camera crew but also to make sure that the person handling the media is not distracted by other jobs. There is no more dangerous situation on a set than holding two media cards in your hand wondering “Which one is ready to go in the camera and be erased, and which one has that very expensive stunt we just shot?”
TAIL SLATE
In some cases, it is not possible to slate before the shot. In such instances,
a tail slate can be used (Figure 18.30). A tail slate (called an endboard in the
UK) comes at the end of the shot and must be done before the camera is
switched off or it can't be used for synchronization. For tail slates, the slate
is held upside down. Best practice is to hold it upside down to show it is
a tail slate and then turn it right side up so the editor can read the
information on the slate. The clapper is still used, and a head ID should be shot as
well, if feasible.
Tail slates should be avoided if at all possible. It is time consuming and
expensive for the DIT, colorist, or editor to roll all the way to the end of
the take, sync up, and then roll back to the beginning to lay the scene in. It
is also very easy to forget at the end of the take and switch off the camera
before the slate comes in. It is also important to note the tail slate on the
camera report.
MOS SLATING
In the case of shooting without sync sound (MOS), everything is the same
except that the clapper is not used. It may be necessary to open the clapper
slightly on a timecode slate for the numbers to be visible, and running, but
it is important not to clap it. If you do, the editor may spend time looking
for an audio track that does not exist. Hold your fingers between the clap
sticks so that it is very clear that no audio is being recorded, as in Figure
18.29. Some ACs just keep the clappers closed to indicate MOS. Of course,
it should be written on the slate as well.
SLATING MULTIPLE CAMERAS
Running multiple cameras on a shot has become more common. Action
scenes and stunts may have five or more cameras. In slating with multiple
cameras, there may be a separate slate for each camera, clearly marked as
A, B, or C slate. The slating AC then calls out “A marker” or “B marker”
and claps the slate for that camera. An alternative is to use a common marker.
This is possible where all cameras are aimed at approximately the same part
of the scene or where one camera can quickly pan over to catch the slate,
then pan back to the opening frame. Then the director must wait for all
operators to say “set” before calling action.
TIMECODE SLATES
Figure 18.29. A properly done MOS slate (no audio recorded)—fingers between the clap sticks make it clear to the editor that there will be no sync mark. MOS is also circled at bottom right of the slate. Some assistants indicate MOS by keeping the slate closed, but this has the danger of being seen as just a mistake; also it won't work on timecode slates, as they usually don't run when the clappers are closed.
Timecode slates include the usual information but also have a digital
readout of the timecode that will match the timecode on the audio recorder;
they are also called smart slates. Timecode slates make syncing up much
quicker and cheaper, and particular shots are more readily identified. The
reason it is faster and cheaper in telecine is that the colorist rolls the film
up to the frame that has the clapper sync on it, then reads the timecode
numbers that are displayed and types them in. The computer-controlled
audio playback deck then automatically rolls up to the correct point to lay
in sync. Having the deck find it automatically is significantly faster than
manually searching. If there is not sufficient pre-roll for the timecode on
the audio tape or if the slate is unreadable, then this automation fails, and
sync must be set up manually. The clapper is still used on timecode slates.
When the clap stick is up, the timecode will be displayed as it is running.
When the clapper is brought down, the timecode freezes, thus indicating
the exact timecode at the time of slating.
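The sync that the deck performs is simple frame arithmetic: a timecode HH:MM:SS:FF maps to an absolute frame count, and the difference between the frozen slate timecode and the timecode at the head of the audio gives the offset to the clap. A rough Python sketch of that arithmetic (assuming non-drop-frame timecode at a whole-number frame rate; the timecode values here are invented):

```python
# Convert non-drop-frame SMPTE timecode to an absolute frame count
# and back, then compute a sync offset between slate and audio.

def tc_to_frames(tc: str, fps: int = 24) -> int:
    """'HH:MM:SS:FF' -> absolute frame number."""
    hh, mm, ss, ff = (int(p) for p in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames: int, fps: int = 24) -> str:
    """Absolute frame number -> 'HH:MM:SS:FF'."""
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

# Timecode frozen on the slate when the clapper came down:
slate_tc = "14:23:08:11"
# Timecode at the first frame of the audio recording:
audio_start_tc = "14:22:59:00"

# Offset (in frames) into the audio where the clap should be found:
offset = tc_to_frames(slate_tc) - tc_to_frames(audio_start_tc)
print(offset, frames_to_tc(offset))
```

Drop-frame timecode (used at 29.97 fps) skips frame numbers on the minute and needs a slightly more involved conversion, but the principle is the same.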
JAMMING THE SLATE
Jamming the slate is for digital slates that display the timecode (Figure 21.38
in Technical Issues). The timecode on the slate must agree with the timecode
on the audio recorder. The timecode on the audio equipment is used as the
master. The digital slate is plugged into the recorder with a special cable
and the timecode signal is “jammed” into the slate. Some slates will display
the words “Feed Me” when it is time to re-jam.
SLATING TECHNIQUES
• Always write legibly, large, and clearly on the slate.
• Make sure the slate is in the shot! A slate that is out of frame
doesn’t do anybody any good. Take notice of where the lens is
pointed and hold the slate in that line.
• Try to hold it at a distance so it nearly fills the frame, so it is
readable! An unreadable slate is worthless. On a very tight shot, it's
OK to roll the slate through the frame before clapping. The scene
and take numbers are what's critical.
Figure 18.30. Tail slates should only be used when absolutely necessary. Proper procedure is to put the slate in upside down at the end of the shot (before the camera cuts) and then turn it over and clap it. This is so the editor can read the numbers.
• If you have to be very far away, such as on a crane shot, sometimes
comically giant clapper sticks are used and there is no chance of
slating the scene and take numbers.
• Hold the slate as steady as you can—a blurred slate can't be read by
the editor (Figure 18.35). Don't start pulling the slate out while
clapping—this will mean the exact moment of sync is blurred.
Hold the slate steady for a brief moment after clapping the sticks.
The main point is that the bottom doesn't move. If both parts of
the slate are moving, it's very difficult to tell when the hit happens,
as the editor is looking for the precise frame where the slate closes
and both parts are free of motion blur.
• It is good technique to tilt the slate slightly down to avoid reflections
that will make it difficult to read.
• If the room is dark or you aren't able to get the slate into the lighting
being used on the scene, then the practice is to illuminate the
slate with a flashlight.
• Once you slate, get out of the way! Taking too long to get out of
the way is annoying to the director, the actors, and the operator.
Have a place picked out to go to—an “escape route.” Remember
that you will need to get there quickly and then stand there
making no noise until the end of the take.
• Don't wander off with the slate. If the First needs you to do something,
make sure the slate stays right with the camera so whoever
is going to slate doesn't have to look for it. There is a slot at the
bottom of the front box for the slate and that's the best place to
keep it (Figure 18.17).
• On the first slate of a scene, you call out “Scene 27, take 1” (or
whatever the scene number is). For the rest of the scene, you only
need to call “take 2” or “take 3,” and so on. You call out “roll
number xx” only when roll/media is changed. In the English
system call out the slate number, but don't say “slate,” as in “376
take 3.” We'll cover this system later.
Try to update the take number immediately after slating. Become a master at removing the cap of your pen quietly during a take. This helps a lot in the event of a quick cut and restart. Never take the slate away from the camera. If you hang it on your belt and walk away from the camera and it rolls, you're going to be embarrassed when the First AC yells for the slate and you have to run it over. If you step away from the camera, leave it behind so the First AC can grab it if necessary.
DP Art Adams
• Always coordinate the correct slate numbers with scripty. The
script supervisor is the authority on what the slate numbers
should be.
• If the audio mixer has pre-slated the audio with the small mic on
the mixer, then you may not need to call out the slate numbers,
but be sure to check this out first by talking to the mixer and
clearing it with the First AC.
• Right before you clap the sticks, say “marker.” This will help the
editor identify what sound is the actual sync mark and not just
some random noise on the set.
• If, for some reason, the frst clap isn’t valid (maybe the camera
wasn’t rolling), the First AC may call for “second sticks.” The
clapper then says “second sticks” and claps it again. The reason to
say this is that there are now possibly two claps on the sound track
and the editor may have trouble figuring out which one is the real
sync marker.
• Don’t put the slate in front of the camera until the First AD calls
“roll sound” or “rolling.” It varies—make an efort to learn how
your DP, Operator, and First AC likes to do it. Putting a slate in
blocks the operator’s view and can be very annoying if they are
not ready and still checking something in the frame. The Opera-
tor will call for “slate in” but you should already be there.
• When calling out the numbers, speak loudly enough that it can be
Figure 18.31. (top) A timecode slate understood on the sound track but don’t shout. The boom opera-
(Photo courtesy E Gustavo Petersen) tor may swing the microphone in to record you and the sync clap
of the sticks, but that isn’t always possible.
Figure 18.32. (above) A timecode app
for smartphones; in this case Mov- • Make sure that the slate is already in the frame when the camera
ieSlate begins rolling. Digital cameras use the frst image as a thumbnail
and this will make the editor’s job of fnding circle takes much
easier. Things that make the editor happy are good for your repu-
tation in the long run!
• Snap the sticks together smartly but not too loudly if you are
close to the actors. If the shot requires you to hold the slate close
to the actor's face, clap the sticks gently! A loud noise is disturbing
to the actor at the moment they are most concentrated. Say
“soft sticks” so the editor will know.
• Update the slate numbers right away. The director may call cut
sooner than expected and need to “go again” right away. It also
prevents forgetting to update the numbers.
• While camera is rolling (“picture is up”) you can update your
camera reports and erase the take number to be ready for the next
take—as long as you can do it quietly and not distract the actors.
• Sometimes circumstances don’t permit slating at the beginning of
a shot; when it’s done at the end of the take, it’s called a tail slate
(Figure 18.30). When doing a tail slate, clap it upside down, then
quickly turn it right side up so the editor doesn’t have to stand on
her head to read it! Be sure to call out “tail slate” when you do it.
Only do tail slates when necessary.
• For MOS shots (no audio) be sure to hold your fingers between
the clap sticks (Figure 18.29). Some prefer to hold the sticks
closed all the time to signify an MOS shot, but fingers between
the sticks is unmistakable. The slate should also say “MOS.”
• For multi-camera shoots, you may need to slate each camera
separately, but the sync clap has to be common for all of them,
although sometimes separate claps are made for each camera—
that’s up to the editor and audio mixer to call. Cameras should
slate in order: A camera, then B camera, etc.
• Multiple camera work is done differently in film and digital.
When shooting film you would roll a few seconds of the slate on
each camera (bumping a slate), then at the beginning of the take
you would call “A & B common mark.”
• Most Second ACs attach their blank camera report forms to the
back of the slate with a piece of tape. Some take shorthand notes
and write up the reports separately.
Figure 18.33. An insert slate is used for situations where a normal slate would be difficult to fit in. You can turn a normal slate into an insert slate by writing the scene and take in the take box with a slash between them, such as 36A/1.
RESHOOTS
A reshoot is when the crew has to go back to a scene that was previously
finished. The problem reshoots pose for slating is that you are technically
shooting the same scene as before—so do you continue numbering from
the last take? The solution is to put an “R” in front of whatever shots
are being re-done. So say you are reshooting Scene 27A; it now becomes
Scene R27A.
SECOND UNIT
If additional shots are needed that don't involve the principal actors, a
second unit is often employed. They might do simple scenes such as establishing
shots, sunsets, cars driving on the road, or more complex scenes
such as car chases, explosions, or major stunts, in which case the second
unit crew might be nearly as large as the main unit crew. Some productions
may also have a splinter unit. Usually an “X” is written in front of the
roll number on the slate and in the script notes and camera reports.
When shooting digital, the slate should always be in frame when the camera rolls, as the first frame of the take ends up being the thumbnail in the editing software. That makes it easy for the editor to find a take. Always think about how to make things easy for the next person down the line who will be working with footage that you help create.
DP Art Adams
VFX
For visual effects shots, be sure to add a label for “VFX” or “Plate VFX”
on the slate. A plate is a clean shot with no actors or action—it is to be used
as the background for computer effects or a greenscreen/bluescreen foreground
to be added later. Some other types of shots are also done specifically
for visual effects and should also be slated appropriately. Since these
shots will be handled by a separate facility, it's a good idea for the VFX or
postproduction supervisor, the script supervisor, and the First AC to agree
on a naming format for these shots.
INSERT SLATES
Figure 18.36. Sun Seeker software in Flat mode (left), Augmented Reality (middle), and Map mode (right)
Sometimes the camera needs to be very close to the subject, such as a tight
insert of a pen on a desk. In these cases, a normal size slate just won't fit into
the shot. Instead, we use an insert slate, which is a very small slate
with no clappers (Figure 18.33). If you do need a sync mark with an insert
slate, the most commonly used solution is to hold the slate still, then gently
tap it on something in the frame—the tapping sound serves as the sync
mark. You can use a regular slate for this by writing the scene and take
number in the “take” slot, such as “36A/5.”
FINDING THE SUN
For all day exterior shooting, it is important to know where the sun is
going to be at any given time. It influences when you might want to
schedule certain shots, when the sun might go behind a mountain, etc. On some
productions, it might even be useful to know where the shadows will fall
when you're going to be on location a month from now, or when the
sun will be coming through the windows of a building you are scouting.
Often, it is important to determine these things at the time of scouting
the location and scheduling the shoot day, so you'll need to know them in
time to work with the AD as they are doing the scheduling.
With sunrise shots, you need to know where the sun is going to come up
before you can see it. Here the problem is that many operators look at the
glow on the horizon and figure that's where the sun will pop up—they are
forgetting that the sun is traveling at an angle.
This makes accurate sun path prediction critical. Several smartphone and
tablet apps are available that make this much easier. One of the best is Sun
Seeker by Graham Dawson. It has three modes (Figure 18.36). Flat mode
(left) shows a compass view with the sun's position throughout the day,
including an indicator of where it is now. Augmented Reality mode (center)
tells you exactly where the sun will be by drawing the sun's path hour by
hour and laying it over the live image.
Map mode (right) shows an overhead photograph of your location with
indicators showing the angle of the sun at different times of the day. A
slider at the bottom allows you to move ahead in time to select the day,
month, and time you'll be shooting.
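Apps like Sun Seeker rely on precise ephemeris data, but the underlying geometry is straightforward. A rough Python sketch using a textbook approximation (this is not what the app actually computes, and it is only accurate to a degree or so, but it shows why the sun's height depends on latitude, date, and time of day):

```python
import math

def sun_elevation(lat_deg: float, day_of_year: int, solar_hour: float) -> float:
    """Approximate solar elevation angle (degrees) for a latitude,
    day of year, and local solar time (12.0 = solar noon)."""
    # Approximate solar declination in degrees (simple cosine model)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: the sun moves 15 degrees per hour from solar noon
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_alt = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_alt))

# Rough sanity check: near the equator on the March equinox (about
# day 80), the noon sun should be close to straight overhead.
print(round(sun_elevation(0.0, 80, 12.0), 1))
```

A real scouting tool would also convert clock time to solar time and account for atmospheric refraction near the horizon, which matters most for exactly the sunrise problem described above.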
19
dit & workflow
Figure 19.1. The importance of proper
labeling and an organized work station
cannot be overemphasized (Courtesy
of DIT Sean Sweeney)
DATA MANAGEMENT
In principle, data management sounds simple: you download the camera
media, put it on a hard drive, and back it up. In reality, it is a process
fraught with danger and chances to mess up badly. Handling the recorded
media is an enormous responsibility; a single click can erase an entire day’s
shooting!
The film industry has spent decades developing methodical and careful
processes regarding the handling of exposed film stock: loaders (the
members of the camera crew who load the film mags with raw stock and
then download the exposed footage) are trained in standard methods and
procedures; they keep extensive standardized paperwork and prepare the
film in a prescribed way for delivery to the film lab, which in turn keeps a
thorough paper trail of every roll of film. On a digital shoot, the person
doing this may be called the loader, the data manager, media manager,
or data wrangler. With digital video, in some ways, the job is made more
difficult by the fact that we are dealing entirely with computer files. Since
they are invisible, keeping track of them and what is on what digital media
calls for careful labeling and systematic organization.
BASIC PRINCIPLES
Certain core principles apply when handling digital media:
• Cover your rear.
• Have a standard procedure and be methodical.
• Maintain all logs. (See first principle.)
COVER YOUR REAR
Let's talk about these in detail. When everything is going OK, people
rarely even notice what the loader is doing; it seems routine and automatic.
When something does go wrong, the entire production can turn
into a blame machine. You don't want to be the person who ends up
bearing responsibility for a major disaster. The most important protection
against this is, of course, to not screw up, but it is also important to be
able to demonstrate that it wasn't you. Film camera assistants have a
long-standing tradition of immediately and unreservedly owning up to
mistakes they make; DPs, ADs, and directors respect them for this. However,
if it really wasn't you that messed up, you need to have the procedures and
paperwork to show it.
[Figure 19.2 reproduces a page of the log spreadsheet “FNF LOG 0926”: columns for Card, Date, Folder, Cam, DL #, File, Size, Card Description, Master, Shuttle, Total GB, and Notes, with one row per file downloaded from each P2 card.]
Figure 19.2. DIT Jillian Arnold keeps highly detailed logs for data management. The column labels: Card refers to the source (P2 cards in this case). Date is the day of the shoot. Folder refers to the parent folder, which is in turn labeled with the production name and date. DL # is the chronological download order of that particular card. File is the file name, which most cameras generate automatically. Description refers to the segment of the show. Master lists which master drive the material is ingested to; Check is a way to verify that the file made it onto that drive. Shuttle describes which shuttle drive the files were transferred to, and the check box is for verification. JA are her initials. Total GB is the total card/folder size. Notes is self-explanatory. She emails this report every night to the producer and post production supervisor. At the end of shooting she sends a final comprehensive report.
Jillian's philosophy: “Keep the process pure, time stamp everything, and be obsessive about the procedures. Producers need to understand that it's a special skill set like loading mags is. You don't just trust anyone with your fresh footage.”
Some people interpret this rule as just being “avoid blame.” That's not
the point at all. The real issue is to make sure nothing goes wrong so that
there is no blame to go around. Protect the production from mistakes and
you'll be hired again and recommended to others. If you do screw up but
immediately own up to it, the DP and production will still know that you
are someone who can be trusted as part of the production process: mistakes
happen—the point is to fix them and prevent them from happening
again.
STANDARD PROCEDURES
As camera assistants have learned over the decades, the number one way
to ensure against mistakes is to have a methodical and organized way of
doing things and then do it the same way every time. Observe any good crew
of camera assistants working: there is almost a ritualistic aura to the way
they work. They are also very focused and attentive to every detail at all
times. Their repetitive actions are practiced and reliable. Their procedures
are standardized industry-wide.
MAINTAIN YOUR LOGS
Film camera assistants have quite a few forms that need to be filled out and
kept up to date: camera reports, film stock inventories, camera logs, and so
on. In the digital world, we are lucky that a good deal of this work is done
by the camera and the various software applications we use immediately
after the camera. Most downloading apps (such as ShotPut Pro, Silverstack,
Double Data, and others) also create logs that can track the files. Many
loaders also maintain separate logs, either handwritten or, more commonly, as
spreadsheets (Figure 19.2).
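A log like the one in Figure 19.2 can also be kept as a spreadsheet-compatible CSV file that grows with each download. A minimal Python sketch (the column set is trimmed from Figure 19.2; the file names, card numbers, and sizes are invented examples):

```python
import csv
import datetime

LOG_COLUMNS = ["Card", "Date", "Cam", "DL #", "File", "Size (GB)",
               "Description", "Master", "Shuttle", "Initials"]

def append_log_rows(log_path, card, cam, dl_number, files, description,
                    master, shuttle, initials):
    """Append one row per downloaded file to the day's log spreadsheet."""
    today = datetime.date.today().isoformat()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:               # new file: write the header first
            writer.writerow(LOG_COLUMNS)
        for name, size_bytes in files:
            writer.writerow([card, today, cam, dl_number, name,
                             round(size_bytes / 1e9, 2),
                             description, master, shuttle, initials])

# Example: log two clips from card A001 after download 01
append_log_rows("fnf_0926_log.csv", card="A001", cam="A", dl_number="01",
                files=[("A001C001.mxf", 4_260_000_000),
                       ("A001C002.mxf", 857_300_000)],
                description="STRATEGY", master="MASTER 01",
                shuttle="SHUTTLE 01", initials="JA")
```

Because the result is plain CSV, it opens directly in any spreadsheet program for the nightly email to the producer and post supervisor.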
PROCEDURE—BEST PRACTICES
By far the biggest danger is accidentally erasing data which has no backup—
this fear hangs over any operation where media is handled. Practices vary
between different data managers and may be adapted or changed for various
productions if the producer or insurance company requires certain
procedures, but they all have one basic goal: ensuring the safety of recorded
data by clearly marking which media is empty and which media has recorded
data on it.
One fundamental principle is that there should be one and only one
person on the set who is allowed to format the media. This addresses
the most basic of all dangers in digital workflow—being absolutely and
unfailingly sure that the data has been downloaded and backed up and so
it is safe to format. You certainly don't want to have conversations on set
like “Is this ready to format? I thought Danny already did that.” There
is no room for uncertainty. Designating one person to be responsible for
formatting helps keep this under control. It is not the entire solution, of
course; a rigid adherence to a standardized procedure is also necessary, just
as camera assistants have always done.
LOCKED AND LOADED
One typical method is for the Second AC to remove the media (whether
SSD or card) from the camera and immediately engage the record lock tab
(if there is one). It is delivered to the DIT or data manager locked. After
downloading, it is returned to the AC with the record lock still engaged.
This way only the AC (the designated formatter, in this case) is authorized
to disengage the lock, put the media back in the camera, and format the
card. This is one method only, and different DITs and camera crews will
have their own way of doing this. Naturally there are variations on this
procedure, such as when the camera crew doesn't want to take the time
to format media. This varies by the type of camera you are using. For
example, it is very quick and simple to format a drive with the Alexa;
on the other hand, formatting media for the Phantom takes quite a bit
of time. What is important about this process is not so much who does it
as that it be an established procedure understood by everyone on the
crew and that it be religiously observed at all times. Remember the basic
religion of being a camera assistant: establish procedures and do it the same
way every time—be methodical!
One method is this: if cards are formatted in the camera there should be
some indication that they are ready. For example, most camera assistants
on commercials will write the roll number on a piece of red tape, and
when the card comes out of the camera the piece of tape is placed on the
card across the contacts, so it can’t be inserted into the camera. If the tape is
removed, this indicates that it has been backed up and can be reformatted.
It falls on the data manager to remove that tape.
GET YOUR SIGNALS STRAIGHT
Without a doubt, the greatest fear is that someone might erase/format a
card or hard drive that has footage on it that has not been stored elsewhere.
There is no DIT, loader, or camera assistant on earth that has not had this
nightmare. The protection, as always, is to develop procedures, make sure
the whole crew knows what they are and then stick to them. Is this card
ready for formatting? There are many systems but perhaps the most reli-
able is to use paper tape, always available as the Second AC carries several
colors of 1” paper tape. Typically, green means “It’s OK to format.” Also,
say something like “This drive is ready for formatting.” Red tape means
“Not Ready for format.” Keep the red tape on it until you are absolutely
sure it is finished and you have two tested backups.
Figure 19.4. A shuttle drive for a TV show. Labeling is critical in data management. (Courtesy Evan Nesbit)
Use only paper tape for marking cards and SSDs, not camera tape or
gaffer tape. Camera tape can leave a sticky gum residue on the media, and
who wants to put that into a camera? There are no hard and fast rules; it is
whatever the crew agrees on. The important thing is consistency and
communication. Many people make it a practice to not only mark the cards
but to add a verbal signal as well, such as “these cards are ready for formatting.”
Always putting them in a consistent location is important too. This
might mean a small box on the DIT cart or something similar.
We usually have a bag that stays by camera with two colour-coded compartments—Green for mags that are ready to be used, and Red for ‘hot’ mags if a take runs long and you've got to quickly reload it. This reflects better on the camera crew if you can do it immediately rather than having to run to the DIT cart.
Stu McOmie, UK Camera Assistant
ALWAYS SCRUB
Make it a habit to always scrub through (preview) the footage, even if only at high speed. A visual check is the only way to be certain the footage is good. You can also be watching for other problems—if you catch something no one else noticed, be sure to let them know. It is always best for a production to know about problems right away, when a reshoot is not a huge problem, as it will become once they have wrapped that location or set. Standard procedure is:
Download > Scrub to check > Mark as ready for format.
Do not scrub through the original media. There are two reasons: first, having done that, you may think that you have downloaded the footage when you have not; and second, it is what actually gets downloaded to the hard drives that matters—that is the copy that needs checking.
THREE DRIVES
Most DITs consider three copies of the footage to be a minimum. Hard drives die; files get corrupted. Backups are your only protection. Hard drives that are used to transfer the footage to post, archives, or the production company are called shuttle drives (Figure 19.4). As an example, the three storage hard drives might be:
• One for the editor.
• One backup for the client/producer.
• One backup for you (so you can be the hero when something bad happens).
An alternate process might be:
• All files on the DIT’s RAID drives.
• Shuttle drive of all files delivered to the producer.
• Shuttle drive of all files delivered to the post house.
The VFX people may also need files delivered separately—they may also need them transcoded differently. Obviously, the DIT can erase all drives on the DIT cart once the shuttle drives have been delivered to the producer/post house and checked, but many DITs prefer to keep the files live on their hard drives as long as possible (meaning until the next job) just as an emergency backup. To be prudent, production and the post house should also make backups of all files as soon as they are received; it’s just common sense.
Some productions do not allow the media to be erased until it has been confirmed by the post house; often this is a requirement of the insurance company. Some insurance companies will not give clearance to format the media until it has been transferred to LTO tape and stored in a vault. Obviously, this requirement has a big effect on the amount of media (SSD drives, Phantom mags, compact flash, SxS cards, and so on) that needs to be ordered and also has meaning for the planning of the DIT/post workflow. This again points to the importance of having a preproduction planning meeting which gets all of the parties together to work out the details of how it’s going to be done. Since downloading drives takes time, some DITs request that they get the media before it is filled up completely. Some crews make it a practice to only fill up drives halfway for this reason.
DO NOT DRAG AND DROP
One principle is universal no matter what procedures are in play: never “drag and drop.” Anyone who uses computers is familiar with the idea of grabbing a folder or file with the mouse and dragging it to a new location. It’s simple enough, but it is to be avoided with video files for a couple of reasons. Some cameras produce video files that are far more complex than just “file.one, file.two,” etc.—there are often associated files in addition to the video clips, and a casual drag can miss them. Drag and drop also does no redundancy checks, as software designed for this purpose does.
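The difference matters enough to sketch. Below is a minimal, illustrative Python version of a checksum-verified offload—the function names are hypothetical, and dedicated tools such as ShotPut Pro or Silverstack do this far more robustly, with logs and multiple destinations:

```python
import hashlib
import shutil
from pathlib import Path

def file_hash(path, algo="md5", chunk_size=1 << 20):
    """Hash a file in chunks so large camera files don't fill memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verified_copy(src_dir, dest_dir):
    """Copy every file, preserving the card's folder structure,
    then re-read both copies and compare checksums."""
    src_dir, dest_dir = Path(src_dir), Path(dest_dir)
    for src in src_dir.rglob("*"):
        if not src.is_file():
            continue
        dest = dest_dir / src.relative_to(src_dir)  # retain file structure
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)  # copy2 also preserves timestamps
        if file_hash(src) != file_hash(dest):
            raise IOError(f"Checksum mismatch on {src}—do not format the card!")
    return True
```

Note that the verification re-reads both files from disk; a copy that only compares what was held in memory proves nothing about what actually landed on the drive.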
LOGS
Logs are something you normally don’t think about much, but they can be very important. Most file copy applications can generate logs which record every detail of the copy and backup process. Some logs can be remarkably verbose, recording details that you might think you’ll never need; however, it is this thoroughness that may save your career some day. These are the kinds of things that really differentiate the professional from the wannabe.
Logs usually only come into play when there is a corrupted or lost file; when this happens, it can often turn into a blame game, and for a DIT or loader to lamely say “I’m pretty sure I copied that file” just isn’t good enough—the log is your backup and paper trail. In a professional situation, it is critical to have the software generate logs and to keep them, or to make your own. Having the download software do it usually involves a selection in the preferences section of the application: select that the software will generate the logs, and keep track of what folder they are kept in. You’ll want to back up this folder and, in many cases, provide copies of the logs to the editor, the producer, or even the insurance company. Beyond the computer files downloading software generates, most DITs and loaders maintain separate logs of all operations, either handwritten or on a computer or tablet, as in Figure 19.2.
“Assuming you’re using the usual software it’ll be doing checks in the background, but I like to manually look at the checksum before scrubbing. I’d also always watch a few seconds (of action!) in real-time. I’ve been in a few situations with issues I wouldn’t have spotted by just scrubbing alone—including one where I wasn’t listened to, and it resulted in a week of reshoots six months later!”
Stu McOmie, UK Camera Assistant
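As an illustration of the kind of record a download log keeps, a few lines of Python can append one row per copied file—timestamp, name, size, and checksum. This is a hypothetical sketch; real offload applications generate far more detailed reports:

```python
import csv
import hashlib
import time
from pathlib import Path

def write_offload_log(copied_files, log_path):
    """Append one line per copied file: when, what, how big, and its
    checksum—the paper trail that settles any later blame game."""
    log_path = Path(log_path)
    new_log = not log_path.exists()
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_log:
            writer.writerow(["timestamp", "file", "bytes", "md5"])
        for path in copied_files:
            path = Path(path)
            digest = hashlib.md5(path.read_bytes()).hexdigest()
            writer.writerow([time.strftime("%Y-%m-%d %H:%M:%S"),
                             path.name, path.stat().st_size, digest])
```

A CSV like this is easy to back up and easy to hand to the editor, producer, or insurance company alongside the software-generated logs.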
Figure 19.6. Proper naming of files is critical. The application A Better File Renamer partially automates this process.
FILE MANAGEMENT
Download everything—this is crucial, so make sure you have plenty of hard drive space at the start of the shoot (see the hard drive storage calculators elsewhere in this chapter). Picking clips can be dangerous unless you have a specific plan or instructions to dump rejects—in general, it’s a very dangerous idea. Retain file structure! Don’t change file names or locations, particularly with .r3d (Red) files. Notes can be kept in “Read Me” text files, which are easy to keep right with the files on hard drives.
FILE NAMING
It is crucial to establish a consistent file naming convention for every project. The cameras themselves generate file names that are orderly and useful. In the end, the editor is likely to have the final say on file naming, as they are the ones who have to deal with the long-term consequences of how the files are named and organized. Again, this shows the importance of that preproduction meeting, phone calls, or exchange of emails—the editor and VFX people need to be part of that conversation.
DOWNLOAD/INGEST SOFTWARE
There are several software applications specifically for downloading and backing up video/audio files within the specific needs of handling data on the set. In some cases, they can also transcode the files. All of these applications offer safeties, such as cyclic redundancy check (CRC), an error-detecting code that detects accidental changes in raw data. They also offer a range of choices for these checks. Most also allow naming of reels/mags and automatically incrementing those numbers with each download.
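CRC in action can be shown in a few lines—Python’s standard zlib module exposes CRC-32. This is a simplified sketch; offload tools typically offer stronger checksums such as MD5 or xxHash as well:

```python
import zlib

def crc_check(original: bytes, copy: bytes) -> bool:
    """Compare CRC-32 checksums of two byte strings—a fast way to
    detect accidental changes introduced during a copy."""
    return zlib.crc32(original) == zlib.crc32(copy)

data = b"clip A001_C002 frame data"
good_copy = bytes(data)
corrupted = bytearray(data)
corrupted[5] ^= 0x01  # flip a single bit, as a bad cable or sector might

assert crc_check(data, good_copy)             # intact copy passes
assert not crc_check(data, bytes(corrupted))  # a one-bit error is caught
```

The point of a CRC is exactly this: even a single flipped bit in terabytes of footage produces a different checksum.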
SHOTPUT PRO
This application is widely used for downloading recorded media on the set (Figure 19.7). It is specifically designed for this purpose and offers all the options a loader or DIT will normally need: multiple copies to different hard drives, logs, and several different methods of file verification. Versions are available for both Mac and Windows operating systems. The company describes it like this: “ShotPut Pro is an automated copy utility application for HD video and photo files. ShotPut Pro is the industry de facto standard offloading application for professionals. The simple user interface and robust copy speeds make it indispensable for today’s tapeless HD workflows.”
SILVERSTACK
Pomfort’s Silverstack does media download but also quite a bit more (Figure 19.8). Silverstack can ingest all camera media types and all file formats. It can do checksum-verified, high-speed backup to multiple destinations. According to Pomfort, “Ingesting source material and creating backups of media from a digital film camera is a repetitive, but responsible task.”
PROPRIETARY DATA MANAGEMENT SOFTWARE
Figure 19.7. (above) Selectable options for ShotPut Pro.
Figure 19.8. (right) An offload report from Pomfort Silverstack.
All camera companies make some sort of file transfer software for the types of media their cameras use, and they make this software available for free download. Arri, Sony, Red, Canon, Panasonic, Blackmagic, and others have software applications with various capabilities. Some are basic data management and some, such as RedCine-X Pro, have color correction and transcoding as well.
EXTERNAL RECORDERS
Most cameras are engineered to record clips internally, although not all of them will record their full resolution in RAW form internally. Sony cameras have long used SxS cards (pronounced “S by S”). Resolution and frame rates sometimes exceed the capacity of Compact Flash and even SxS cards. The Alexa is a good example of this; only QuickTime files are recorded directly on the camera—on dual SxS cards which can easily be downloaded onto a laptop and external drives. In order to record ArriRAW files, an off-board recorder can be used (an internal XR drive is also an option). The Codex is produced specifically for this purpose and is engineered to work with various cameras. It is fairly small and light, and even Steadicam operators find it easy to work with.
In dealing with hard drives, whether spinning drives or solid state, it is important to remember that there are only two types of hard drives: those that have died and those that are going to die. This is what makes backups and redundancy so critical. The same applies to flash storage as well—media fails, it happens all the time. Expect it. Be ready for it. When you calculate how many hard drives you will need for a job, include some redundancy for potential drive failure.
TEN NUMBERS: ASC-CDL
As cinematographers have a great deal at stake with how images are captured, processed, altered, and shown, the American Society of Cinematographers made an early move to ensure that the director of photography doesn’t have to see their images put on a hard drive and then just wave goodbye and hope for the best. During production, when there is little or no chance of the DP having time available to supervise the production of dailies (either from film or from digital acquisition), they saw a need for an organized system for the cinematographer to convey their ideas to post people in some way other than vague statements such as “this scene was intended to have deep shadows and an overall sepia tone.” For years, film cinematographers have communicated with the colorist doing their dailies every night by any means possible: written notes, stills taken on the set, photos torn from magazines—anything they could think of. How well it worked depended on the communications skills of the DP and also on the night shift colorist. Of course, it’s not the most experienced people who work the night shift doing dailies. Viewing on the set monitors often involves applying a Rec.709 based LUT, which has a far more restricted gamut than what is actually recorded. See Figure 10.21.
GOODBYE, KNOBS
Figure 19.9. Typical workflow with the ASC-CDL system.
In large part, this effort to give the cinematographer tools to send their image ideas down the line to the post people is an effort to address the fact that as we move toward cameras that shoot RAW and log, the DP no longer has “the knobs” to make adjustments to the look of the picture in the camera. As we will see, the ACES workflow brings all cameras into a unified color space.
Fortunately, the digital workflow opened up not only new ways for post to screw up our images, it also created new possibilities to maintain some control over our vision of what the image was intended to be. To facilitate control of the image after it leaves the set, the ASC Technology Committee, under chairman Curtis Clark, devised the ASC Color Decision List (CDL). In operation, it allows the interchange of basic RGB color-correction information between equipment and applications made by different manufacturers. Although the basic controls of most color-correction systems are similar, they all differ somewhat in specific implementation and in the terminology used to label various controls or aspects of the image. The terms Lift (for dark tones), Gain (highlights), and Gamma (mid-tones) are commonly used by most color-correction systems, but those terms inevitably vary in detail from company to company (see The Filmmaker’s Guide to Digital Imaging by the same author for more detail).
To avoid confusion with already existing systems, the committee decided on a set of three functions with unique names: Offset, Slope (gain), and Power (gamma). Each function uses one number for each color channel, so the transfer functions for the three color components can be described by nine parameters. A tenth number was added for Saturation, and it applies to all channels. ASC-CDL does not specify a color space.
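Those ten numbers define a simple per-pixel transform, sketched below in Python. This is a minimal illustration of the Slope/Offset/Power and Saturation math only—the published ASC-CDL specification also defines interchange file formats, and the clamping here is simplified:

```python
def asc_cdl(rgb, slope, offset, power, saturation):
    """Apply the ten ASC-CDL numbers to one RGB pixel (values 0-1):
    three Slopes, three Offsets, three Powers, and one Saturation."""
    out = []
    for c, s, o, p in zip(rgb, slope, offset, power):
        v = c * s + o                  # Slope (gain), then Offset
        v = min(max(v, 0.0), 1.0)      # clamp before Power (simplified)
        out.append(v ** p)             # Power (gamma-like)
    # Saturation applies to all channels at once, using Rec.709 luma weights
    luma = 0.2126 * out[0] + 0.7152 * out[1] + 0.0722 * out[2]
    return [luma + saturation * (v - luma) for v in out]

# Saturation of 0 collapses the pixel to its luma—a grayscale image
r, g, b = asc_cdl([0.8, 0.3, 0.1], (1, 1, 1), (0, 0, 0), (1, 1, 1), 0.0)
assert r == g == b
```

With slope 1, offset 0, power 1, and saturation 1, the transform is an identity—the pixel passes through untouched, which is the neutral starting point a colorist receives.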
The ASC Color Decision List is a system designed to facilitate that “transfer of the idea”—it’s about giving the cinematographer the tools to send their decisions about the look down the line to the editor, colorist, and other links in the production chain. It addresses the fact that the tools that are used in post-production are often not available on the set and, in fact, at the time you’re shooting, it might not even be known what those tools are going to be. It’s entirely possible that different software and hardware will be used at every step of the process; in reality, it would be very difficult to ensure any uniformity of machines or software, particularly on features where dozens of different post houses and effects facilities are involved. Close coordination and supervision are necessary all through the process, and the various tools must be capable of working with the CDL data. In order to be effective for a production, it is important to test the workflow before production begins and for the entire team to agree on the color space the CDL is being applied to and used for monitoring. It applies to log, linear, and gamma-encoded video.
[Figure 19.9 diagram: RAW files from the camera go through data management software to a RAID; from there, shuttle drives go to the producer, to post, and to VFX (which returns finished VFX shots); transcoding produces dailies and viewing files; LUTs/look files and the CDL travel to post; dailies color correction feeds monitors on the set and dailies for the director/producer; archiving is on LTO. A typical workflow for digital cameras—workflow will vary for different cameras and the job requirements of each production.]
20
power & distro
Figure 20.1. Diagrams of single-phase and three-phase electrical service (signal strength plotted against time, with one wavelength marked).
The set electrician need have only a basic knowledge of electricity but must have a firm grasp of the fundamentals. For our purposes we must start with an understanding of the four basic measurements of electricity: volts, amps, watts, and ohms. From there we will look at the issues that confront a set electrician and briefly review standard set operating procedure for the electrical crew.
Electrical current occurs when a positively charged pole is connected to a negatively charged pole. In direct current (DC) it flows from negative to positive. Direct current is what you would get from a battery, for example, but there are other sources of DC such as generators and power supplies. The electricity available in most homes and buildings is AC (alternating current). It flows in one direction, then reverses. In the US, it does this reversal 60 times a second. In Europe and other countries, it reverses 50 times a second. This cycle is measured in Hertz (Hz); thus US mains power (the electricity available on the main power grid of the country) is 60 hertz. DC is simple: only two wires are needed, a “hot” wire and a ground.
MEASUREMENT OF ELECTRICITY
For use on the set, we only need be concerned with the four fundamental
measurements of electrical power: volts, watts, amps, and ampacity.
POTENTIAL
Electricity is measured in terms of potential. That is, how much power (or
“work”) it is capable of doing. Potential can be thought of as “pressure.”
Electricity flows from a source of high potential to low potential. Electrical potential is measured in volts or voltage (named for Alessandro Volta, inventor of the chemical battery).
Figure 20.2. (below, top) Two water tanks and a pipe are an analogy for voltage or electrical potential.
Figure 20.3. (below, bottom) Two water tanks that are equally full and at the same height have no electrical potential—no water will flow.
A water analogy works well for electricity (Figures 20.2 and 20.3). Imagine a big water tank with a pipe leading out of it to a lower tank. The amount of water in the upper tank determines the pressure. If there’s a lot of water, there’s high pressure in the pipe; not much water and the pressure goes down. The water pressure is like voltage. Clearly, the size of the pipe influences how fast the water flows and how much of it can flow through in a certain time. The size of the pipe is resistance. In electricity, it is measured in ohms. The flow of water is the same as current in electricity. Current is how much total flow of electrons there is. Current is measured in amperes or amps.
The pipe might be very large, but if there is no potential in the circuit (if the upper tank is dry) then there will be no current. The size of the pipe is how much can flow if there is potential. The maximum capacity the pipe can safely carry is the same as ampacity. All electrical cables and connectors are rated for their ampacity, but this depends on outside factors such as operating temperature. Too much water flowing (too much pressure) might make a pipe burst. Too much amperage flowing on a wire or connector (exceeding ampacity) will make it overheat and melt or start a fire.
For example, an ordinary household wall plug is rated for 20 amps. That is how much current it can carry without overheating and melting or causing a fire. The wire that goes to the wall plug is usually #12 wire—which has an ampacity of 20 amps. Europe and most of the world operates with 220 volt electrical supply. The US uses 110 volt power supply for most residences and offices. Higher voltage is used for industrial uses and for transmitting power over long distances. The transmission towers you see out in the country are usually 13,000 volts or more, for example. Higher voltage is easier to transmit long distances with minimal loss. Voltage is rarely constant. It can vary throughout the day according to the load on the system. This is why voltage often drops a bit in the summer when everyone has their air conditioning on. The total amount of work done is measured in watts. For example, with light bulbs—a 200-watt bulb delivers a lot more light than a 40-watt bulb. So here are our basic terms:
• Volts or voltage = electrical potential.
• Amps or amperage = current, the flow of electricity.
• Ohms = resistance to the flow.
• Watts or wattage = the total amount of work done.
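These relationships are just arithmetic—watts = volts × amps—so the load a lamp puts on a circuit can be sketched in a few lines. This is an illustrative sketch with hypothetical function names, using the 110 V US supply discussed above:

```python
def amps_drawn(watts, volts=110.0):
    """Current drawn by a resistive load: amps = watts / volts."""
    return watts / volts

def fits_on_circuit(lamp_watts, circuit_amps=20.0, volts=110.0):
    """Check a group of lamps against a 20-amp household circuit."""
    return sum(amps_drawn(w, volts) for w in lamp_watts) <= circuit_amps

# A 2K (2,000-watt) lamp on 110 V draws about 18.2 amps—nearly the full
# capacity of a 20-amp wall circuit, so little else should share it.
print(round(amps_drawn(2000), 1))  # → 18.2
```

The same arithmetic scales up: it is how a gaffer sizes feeder cable, distro boxes, and the generator against the total wattage of the package.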
[Diagram: a three-phase system (120/240 AC) with three hot legs (red, yellow/black, blue), neutral, and ground, beside a single-phase system (120/240 AC) with two hot legs (red, blue), neutral, and ground.]
have the pins arranged in a circle. One or more of the pins has a nib on the end that creates a positive lock when given a final twist.
BULL SWITCHES
A bull switch is a portable disconnect switch with breakers for each hot
leg. Most rental houses now have variable disconnects with electronic
breakers that can be set for a wide variety of settings (160-400A). A bull
switch should be put in every distribution system, usually right after the
tie-in or generator. In some cases you may want it near the set where it can
be used as a master switch to shut everything down.
FEEDER CABLE
Feeder cable runs from the power source to the distribution boxes. The real workhorse of the business was #2 welding cable—for safety reasons, welding cable has been replaced by entertainment cable, which is essentially the same thing but double-jacketed to stand up to the abuse of being on set. Rated at 115 amps, it is compatible with the most common fuse boxes and distribution boxes (generally 100 amps per leg). It is flexible and easy to work with, and most often terminates in Cam-Lok connectors. For larger service, #0 (one aught) and #00 are frequently used. For the really big jobs, such as a 225 amp Brute Arc, #0000 is the largest size most rental houses carry. These heavy cables come as individual wires, in 25, 50, and 100 foot lengths. Picking up a 100 foot length of #0000 is a heavy lift.
DISTRIBUTION BOXES
Distribution or distro boxes (sometimes called D-boxes) accept feeder cable inputs (in the US, most commonly Cam-Lok) and have output connectors for sub-feeder such as 100 amp or 60 amp Bates. Most distro boxes are “pass-through,” which means that there are also outputs for the same size feeder as the inputs, thus allowing other distro boxes on the same line (Figures 20.25 and 20.27). Outputs for the Bates connectors must also be fused. As a rule, any line must be fused for the smallest load it can handle. In this case, since each output goes to 100 amp Bates, that output must also be fused for 100 amps, regardless of the size of the input cable.
LUNCH BOXES, SNAKE BITES, AND GANGBOXES
At the end of a Bates cable, it is often necessary to have some Edison connectors. To do this, the most convenient solution is a lunch box, which consists of a 100 amp Bates input and 100 amp Bates pass-through (Figure 20.27). The output is five Edison duplex circuits. Snake bites go directly from Cam-Lok or pin plug to Bates outputs, usually 100 amp (Figure 20.29). A gangbox is a 100 amp or 60 amp input with four or five Edison connectors. A four-way is a 20 amp Edison input with four Edison outputs (Figure 20.33).
Figure 20.21. Cam-Lok 5-wire banded feeder cable—three hot legs (red, blue, black), one neutral (white), and one ground (green). Banded cable is usually #2 wire. Some rental houses supply feeder cable with the neutral and ground connectors reversed to reduce the chance of either of them being plugged into a hot line, which would result in an extremely hazardous situation. (Photo courtesy Gencable)
EXTENSIONS (STINGERS)
Lines that terminate in Bates, Cam-Lok, or pin plug can be connected directly into the distro box. Large lights (5K and above) usually have one of these types of connectors. Smaller units are Edison and go into a Bates connector, Cam-Lok, or pin.
If the head cable of the light doesn’t reach the box, there are several options. To extend out to a light that has a Bates, Cam-Lok, pin plug, or other type of connector, we need sub-distribution. Bates cables are the most commonly found for this. For lighter loads (up to 20 amps) we can use single extensions. A beefier cousin of the household extension cord, it consists of a length of 12/3 (three conductors of #12) with Edison plugs. It is also called a single, a single extension, a stinger, or a 12/3.
ZIP EXTENSIONS
Occasionally, you may need to power small devices such as a prop table lamp. For this, zip cord is often used, especially if you need to hide it or make it blend in with the set. Zip is what an ordinary household extension cord is made of. It is either 18/2 or 16/2 wire that comes in several colors, including black, brown, and white. For larger uses, 12/2 and 12/3 are also available and capable of carrying 20 amps.
PLANNING A DISTRIBUTION SYSTEM
The primary challenge of planning is to get the power where you need it. The basic trick is to get enough stage boxes (or whatever distribution boxes you are using) so that your final cabling to the lights is reasonable. The most common ways to get into trouble with distribution are:
• Not enough distro boxes, so that a web of extensions, stingers, and gang boxes cross and recross the set.
• Not enough secondary distribution (e.g., Bates).
• Enough power but not enough break-outs: gang boxes, lunch boxes, or Bates outlets, depending on which system you are using.
As much as any other part of lighting, planning a distribution system requires thinking ahead, anticipating variations, and allowing slack for the unexpected.
BALANCING THE LOAD
Once the power is up, one major responsibility remains: balancing the load. Simply put, keeping a balanced load means keeping a roughly equal amount of current flowing on all legs, whether it be a three-wire or a four-wire system (not counting the ground line, which makes them four-wire or five-wire systems). If a system is rated at 100 amps per leg and the load
Figure 20.22. The “ring of fire” around the outside of the set is a standard method of arranging electrical distribution on a set. [Diagram: a main distro box feeds sub-feeder (e.g., Bates cable) in a ring around the set, with lunch boxes and Y-connectors at intervals.] It ensures that power will be available on all sides of the scene without the need for cables to cross the set and be in the way of dolly moves, actors’ movement, or otherwise be in the way of the shot. It is shown here in a simplified distro configuration. A ring of fire is when the feeder makes a complete circle around set and connects back into itself. This makes it possible to load the cable more than normal. Many technicians prefer to connect the feeder back to itself to form a real ring.
is 100 amps, what is wrong with it all being on the same leg—after all, the fuses aren’t going to blow? There are several problems. First, we have to remember that electricity travels in two directions. Remember Fudd’s Law: “What goes in, must come out.” To put it more mathematically, the current on the neutral is equal to the sum of the differences between the loads on the hot legs. In other words, on a three-phase system, if there are 100 amps on each leg, the current on the neutral will be zero—the phases cancel each other out. On the other hand, if the same system has 100 amps on one leg and nothing on the other legs, the current on the neutral will be 100 amps. This applies to linear loads, such as tungsten lights.
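For linear loads, this cancellation falls out of simple phasor arithmetic—the three legs are 120 degrees apart—and can be checked in a few lines of Python. An illustrative sketch, not a substitute for an ammeter:

```python
import cmath
import math

def neutral_current(leg_amps):
    """Neutral current for a three-phase system with resistive (linear)
    loads: the three legs are 120 degrees apart, so their currents add
    as phasors and a balanced load cancels to zero on the neutral."""
    phasors = [amps * cmath.exp(1j * math.radians(120 * k))
               for k, amps in enumerate(leg_amps)]
    return abs(sum(phasors))

print(round(neutral_current([100, 100, 100])))  # balanced → 0
print(round(neutral_current([100, 0, 0])))      # all on one leg → 100
```

The two printed cases are exactly the ones described in the text: a fully balanced load cancels on the neutral, while everything on one leg puts the entire 100 amps on it.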
Gaffer Guy Holt explains it this way: “Here is a much simplified explanation of power factor and why it is necessary in HMI & Kino ballasts. With a purely resistive AC load (Incandescent Lamps, Heaters, etc.) voltage and current waveforms are in step (or in phase), changing polarity at the same instant in each cycle (a high power factor or unity). With ‘non-linear loads’ (magnetic and electronic HMI & Fluorescent ballasts) energy storage in the loads impedes the flow of current and results in a time difference between the current and voltage waveforms—they are out of phase (a low power factor). In other words, during each cycle of the AC voltage, extra energy, in addition to any energy consumed in the load, is temporarily stored in the load in electric or magnetic fields, and then returned to the power distribution a fraction of a second later in the cycle. The ‘ebb and flow’ of this nonproductive power increases the current in the line. Thus, a load with a low power factor will use higher currents to transfer a given quantity of real power than a load with a high power factor.”
This is dangerous for several reasons. First of all, we run the danger of
overloading the neutral wire. Keep in mind that a neutral is never fused,
so overloading it can have dire consequences. Secondly, even if our system
[Figure 20.23 diagram: a generator feeds #00 banded feeder cable (approx. 500') to a 600 amp pass-through distro box, then banded feeder (approx. 300') down the slope of the terrain to a 300 amp distro box; 100 Amp Bates (three lengths), 60 Amp Bates, single extensions, and lunch boxes serve two Maxi-Brutes with 1/2 CTB, a 9-Lite FAY with 1/2 or 1/4 CTS, and 2Ks with flicker boxes and 1/4 CTS. Notes: all lights (except Maxi-Brutes) must be highly mobile; all lights (except the 9-Lite FAY) to have 1/4 or 1/2 CTS and be on flicker box; Maxi-Brutes to have 1/2 CTO; we might have voltage drop issues.]
Figure 20.23. This is the distribution diagram for the Medieval knights film shown in the chapter Lighting—enough information so that the gaffer can make the final calculations on generator size, ampacity of distro cable, number of distro boxes, connectors, and adapters. This multi-day shoot would have scenes shot in different parts of the larger area; advance cabling for future scenes was a time-saver.
can handle an overcurrent in the neutral, the transformer in the electric room or on the pole outside might fail as a result of the imbalance. In the realm of “things that will ensure you never get hired again by this company,” melting down the transformer is near the top of the list.
The moral of the story is: balance your load. As you plug in or switch on, distribute the usage across all legs. Once everything is up and running, the load is checked with a clamp-on ammeter. Generically called an amprobe (after a major manufacturer of the devices), this type of meter is one of the most fundamental tools of the film electrician. A good trick is to always read a load in the same order, for example red, yellow, blue (RYB: “rib”). That way you only need to memorize a set of three numbers. This will help you remember which legs are high or low. In some cases, it may be
necessary to run a ghost load. For example, you have a 100 amp generator but are running only one light, a 10K. Since there is only one light, the load would be entirely on one leg—deadly to a small genny. Plug unused lights into the other legs and point them away so that they don’t interfere with the set lights. Don’t set them face down on the ground or cover them. This could cause them to overheat and create a fire hazard or blow the bulb.
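The plug-in habit described here—put each new load on whichever leg is currently lightest—amounts to a simple greedy assignment, sketched below. The helper is hypothetical; real balancing happens at the distro box with an ammeter in hand:

```python
def balance_load(lamp_amps, legs=3):
    """Assign each load to whichever leg currently carries the least,
    largest loads first—a greedy approximation of a balanced system."""
    leg_loads = [0.0] * legs
    plan = []
    for amps in sorted(lamp_amps, reverse=True):
        leg = leg_loads.index(min(leg_loads))  # the least-loaded leg
        leg_loads[leg] += amps
        plan.append((amps, leg))
    return leg_loads, plan

# Hypothetical amperages for a small lighting package
leg_loads, plan = balance_load([83.3, 16.6, 16.6, 8.3, 8.3, 5.4])
```

Sorting the big units first matters: a 10K placed last can undo a balance that looked fine while the small units were going up, which is exactly why gaffers think about the large heads before the stingers.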
In a tie-in situation don’t forget that there may be other uses in the building. Check the load at the tie-in and on the building service itself. Even though your lights are in balance, there may be other heavy use in the building that is overloading the system or throwing it out of balance. This can be tricky—the worst offenders are, in fact, intermittent users: freezers, air conditioners, etc., which might be off when you scout the location and check the load, but kick in just as you are pulling your heaviest load.
[Diagram: three-phase electrical supply (hot legs red, yellow/black, blue) versus single-phase supply (hot legs red, blue), each with neutral and ground. With lights connected unevenly across the legs, the unbalanced system carries 83 amps on the neutral; the balanced system carries 0 amps on the neutral.]
• Keep your load balanced.
• With large lights (5K and above in tungsten and some HMIs) the
bulbs should be removed for transportation.
• Hot lenses can shatter when water hits them. If rain or rain effects are a problem, provide rain hoods or other rain protection.
• Warn people to look away when switching on a light by shouting
“striking.”
• Don’t assume it’s alternating current (AC). DC can be deadly to
AC equipment. Check it.
WET WORK
Because moisture reduces the resistance in a ground fault circuit, thereby
increasing shock current, working in or near water is probably the most
hazardous situation a crew can face. Extra precautions and constant vig-
ilance are required. Ground fault interrupters (GFI) are highly recom-
mended (Figures 20.27 and 20.34). Saltwater soaked beaches are particu-
larly hazardous. Some things to keep in mind:
• Keep all stage boxes and connectors dry by wrapping them in
plastic and keep them on half apple boxes. Remember that gaffer's tape is not resistant to water or cold; plastic electrical tape will
work. Garbage bags are useful, but visqueen or heavy plastic is
better.
• Ground everything in sight.
• Keep HMI ballasts out of water and off of damp sand.
• Place dry rubber mats liberally, wherever someone might be
working with hot items such as HMI ballasts, distribution boxes,
etc.
HMI SAFETY
HMIs can be extremely hazardous. The startup voltage on an HMI can be
13,000 volts or more.
• Never operate the HMI with the lens off. The lens filters harmful ultraviolet radiation.
• Never attempt to defeat the micro-switch, which prevents the unit from firing if the lens is open.
GROUNDING SAFETY
The concept of safety grounds is simple. We always want the electricity to
have a path that is easier to follow than through us. Most building codes
require that the neutral and ground be bonded together at the box and that
Figure 20.27. (right, top) A pass-through lunch box. It has a 100 amp Bates input on one side and a 100 amp Bates output on the other side. It also has a Shock Stop Ground Fault Circuit Interrupter in line with the 100 amp Bates input. The plywood X is a clever design for keeping the equipment off of wet ground to reduce potential problems. When separated, the two halves for several of them fit neatly into a milk crate for storage on the truck. (Photo courtesy Guy Holt, ScreenLight & Grip)
Figure 20.28. (right, bottom) A gang box with 100 amp Bates connector.
Figure 20.29. (above, top) A Snake Bite—CamLok to Bates.
Figure 20.30. (above, bottom) A ground squid with Cam-Lok connectors; used when you have several lines to ground but only one ground outlet on the distro box.
a direct ground run from the box to the earth. This ground is then continued by the continuous metal contact of conduit and box to the metal housing of all devices.
Cold water pipes are a reliable ground as they have a continuous metal-to-metal circuit connected to the supply pipes that are buried in earth. Hot water pipes are less likely because the metal-to-metal contact may be broken at the hot water heater. Sprinkler system pipes are grounded, but it is considered unsafe (and in most cases illegal) to ground to them.
• Always check for adequate ground with a voltage meter. Read the potential voltage between the ground and a hot source.
• Remember that paint is an insulator. You will not get a good ground if you don't scrape down to bare metal.
• A ground is not truly safe if it is not capable of carrying the entire load. When in doubt, ground it.
Figure 20.31. (above, top) 18/2 zip cord for lamps and other small devices. It has Quick-On connectors attached.
Figure 20.32. (above) A male Quick-On connector for zip cord.
Figure 20.33. (left) A four way with Edison (Hubbell) connectors. Four ways may be short like this or 10, 15, or 25 feet long.
GFCI
A ground fault circuit interrupter (GFCI) can help prevent electrocution.
GFCIs are generally installed where electrical circuits or those using
them may accidentally come into contact with water but may be useful
in many other situations where the equipment is not grounded in other
ways. A ground fault is a conducting connection between any electric con-
ductor and any conducting material that is grounded or that may become
grounded. GFCIs are not just for wet work anymore. Starting with the
2017 Code, all outdoor receptacles (both portable and fixed) of 150V or
less to ground, 50A or less, must be GFCI protected. This includes power
to Video Village, Crafty, Catering, Basecamp, and small cord set lighting.
Shock Stop and Lifeguard are two examples of portable GFCI protection
(Figures 20.27 and 20.34).
HOW DOES A GFCI WORK?
Guy Holt writes “GFCIs trip on an ‘inverse time’ curve. An inverse time
curve introduces a delay that decreases as the magnitude of the current
increases. The advantage to an inverse time trip curve is that it permits a
transient imbalance that is sufficiently short in duration so as not to pose a
danger to pass while keeping current through the body to safe levels.
The advantage of [the inverse] trip curve is that it minimizes nuisance
tripping from surges in residual current while providing protection from
shocks. ‘Film’ GFCIs, like the Shock Stops and LifeGuards, use sophisticated
(read expensive) micro-processors to trip more closely to the inverse time
curve and so are more forgiving of transient surges caused by switching on
other lights, which greatly reduces nuisance tripping.
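The inverse-time idea can be sketched numerically. The formula below is the UL 943 Class A maximum-trip-time curve, not the actual behavior of a Shock Stop or LifeGuard unit; treat the function name and numbers as illustrative:

```python
def max_trip_time_s(leakage_ma):
    """Maximum allowed trip time for a Class A GFCI under UL 943,
    using the inverse-time curve T = (20 / I)^1.43 with I in milliamps.
    The bigger the fault current, the shorter the permitted delay."""
    return (20.0 / leakage_ma) ** 1.43

# A fault at the 6 mA trip threshold may take several seconds to clear;
# a 100 mA fault must clear in roughly a tenth of a second.
slow = max_trip_time_s(6)
fast = max_trip_time_s(100)
```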
It is probably worth noting that non-linear loads (HMIs, Kinos, & LEDs
—the predominant loads these days) will by design leak a small amount of
harmonic current to the equipment grounding conductor called ‘residual
Figure 20.34. A GFCI inline right from
the generator (Photo courtesy Guy Holt,
ScreenLight & Grip)
21
technical issues
SHOOTING GREENSCREEN/BLUESCREEN
Chroma key, known as bluescreen, greenscreen, or process photography, is a
method of producing mattes for compositing. The basic principle is the same
for all processes and for both film and video: by including an area of a pure
color in the scene, that color can then be made transparent, and another
image can be substituted for it.
The people or objects you are shooting are called the foreground plate, and what you will eventually put in to replace the green or blue screen is called the background plate. Green and blue are the most common colors used, but
in theory, any color can be used. There is one fundamental principle that
must be remembered: whatever color you are using to become transparent
will be replaced for the entire scene. Thus, if the background is green and
the actor has a bright green shirt, his torso will become transparent.
If there is any camera movement in a matte shot, you should always
include tracking marks (Figure 21.12). These can be as simple as crosses of
tape on the background as a clue to the movement that will be required of
the background element that is to be laid in.
Another important practice is to shoot a reference frame (screen cor-
rection plate). This is a shot of the same green or bluescreen background
without the foreground element—just the screen itself, in case there is any
problem with the matte. Other recommendations:
• Use the lowest-grain film possible (low ISO) or native ISO on an HD/UHD camera. Grain and video noise can make compositing more difficult, and it may also be difficult to match the grain/noise level between foreground and background.
• In video use the highest resolution format possible.
• Check that the codec and file format you are using is appropriate for chromakey work.
Figure 21.1. (top) Typical bluescreen (or greenscreen) lighting setup with Kino Flos. (Photo courtesy of Kino Flo)
Figure 21.2. (above) Spacing is important for even illumination of the cyc, which is critical for a good process shot. (Photo courtesy of Kino Flo)
• Do not use diffusion over the lens. Avoid heavy smoke effects. Keying software can deal with some smoke effects, but there are limits.
• Always shoot a grayscale lit with neutral light (3200K or 5500K) at the beginning of each roll when shooting on film.
• Try to avoid shooting with the lens wide open. The reason for this is that many camera lenses vignette slightly when wide open, which can create problems with the matte.
• In video, never compensate for low light levels by boosting the
gain, which will increase noise.
• To match the depth-of-field of the foreground, shoot the background plate with the focus set at where it would be if the foreground object was actually there.
• The perspective of foreground and background plates must
match. Use the same camera, or one with the same sensor size, the
same lens, camera height, and angle of tilt for both.
• Plan the lighting, screen direction, and perspective of the back-
ground plates; they must match the foreground plate.
LIGHTING FOR GREENSCREEN/BLUESCREEN
Most important is that the lighting be even across the background screen.
Optimum exposure levels for the green or blue screen depend on the
nature of the subject and the setup. In general, you want the exposure
of the screen to be about the same as the foreground. There is no general
agreement on this. Some people set them to be the same; some people
underexpose the background by up to one stop, and some people light
the background by as much as one stop hotter than the foreground. The
bottom line is simple: ask the person who will be doing the final composite—the compositor or effects supervisor. Different visual effects houses will have varying preferences that may be based on the hardware/software combination they use. Always consult with your effects people before
shooting. This is the golden rule of shooting any type of effects: always talk to the postproduction people who will be dealing with the footage: ultimately they are the ones who are going to have to deal with any problems.
Figure 21.3. A set built to exactly re-create a subway station in order to accommodate the green screen, which wouldn't be possible in a working subway station. (Photo courtesy Michael Gallart)
Lighting the background can be done in many ways using tungsten units,
HMIs, or even daylight. Kino Flo makes special bulbs for lighting back-
grounds; they are available in both green and blue. Figures 21.1 and 21.2
show Kino Flo’s recommendations for using their units to light a back-
ground.
Nothing will undermine the believability of a composite more than a
mismatch of lighting in the foreground and background plate. Attention must be paid to recreating the look, direction, and quality of the lighting
in the background plate. When shooting greenscreen/bluescreen:
• Keep the actors as far away from the background as you can to
guard against backsplash, 12 to 15 feet if possible.
• Light the background as evenly as possible: within 1/3 stop varia-
tion on any part of the screen is ideal.
• Don’t include the matte color in the scene; for example, when
shooting greenscreen, don’t have green props or anybody wearing
green wardrobe.
• Use the waveform monitor or a spot meter to read the green-
screen; use an incident meter to read the subject.
• In general, greenscreen is used for HD video and bluescreen is used for film. This is based on differences in how film and video react to colors with the least amount of noise.
The reason you use an incident meter to read the subject and a spot meter (reflectance meter) for the background is that greenscreen/bluescreen materials vary in their reflectivity. We are not concerned with how much light is hitting the background, only how much light it is reflecting back toward the camera. The best indicator for exposing your green or blue screen is the waveform monitor; you are looking for a thin flat line for the background. The vectorscope will show you the saturation. Ask the EFX house what IRE they want the screen value to be, but it will generally be between 40 and 55 IRE.
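For rough translation between a waveform reading and digital code values, here is a minimal sketch, assuming legal-range 8-bit Rec.709 levels (full-range cameras map differently):

```python
def ire_to_code_8bit(ire):
    """Map an IRE level to an 8-bit code value, assuming legal-range
    Rec.709 video where 0 IRE sits at code 16 and 100 IRE at code 235."""
    return 16 + (ire / 100.0) * (235 - 16)

# The 40-55 IRE band often requested for a green or blue screen:
low = ire_to_code_8bit(40)    # about 104
high = ire_to_code_8bit(55)   # about 136
```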
For the subjects (actors or whatever) being photographed, we read them
as we normally would, with an incident meter. Using a spot meter on an
actor can be tricky. What part of them do you read? Their forehead? Their
shirt? Their hair? With a reflectance meter those are all likely to be different readings. Which one of them represents the actual exposure, the f/stop the lens should be set at? Yes, you can do it, if you have a good understanding of the Zone system, as discussed in the chapter Exposure. It is possible, but not usually necessary; an incident meter gives us an excellent reading when used properly as previously discussed: holding the meter so that it is receiving the same lighting as the subject, receptor aimed at the camera, and a hand shielding the meter from backlight, kickers, or any other stray light that might not be relevant to the subject exposure. In video, the waveform monitor is useful in judging exposure and balance; a vectorscope can reveal any color problems.
Figure 21.4. (top) A blue screen shoot for a time-slicing shot.
DIMMERS
The golden rule of shooting any type of effects: always talk to the postproduction people. They are the ones who are going to have to deal with the footage.
There are a number of ways we control the intensity of a light's output at the unit itself:
• Flood-spot.
• Wire scrims.
• Grip nets.
• Diffusion.
• Neutral density filters.
• Aim (spilling the beam).
• Switching bulbs on and off in multi-bulb units (multi-PARs or soft lights).
The alternative is to control the power input into the light with dimmers.
There are advantages and disadvantages.
The advantages are:
• Fine degree of control.
• Ability to control inaccessible units.
• Ability to quickly look at different combinations.
• Ability to do cues.
• Ability to preset scene combinations.
• Ability to save energy and reduce heat buildup by cooling down between takes.
Figures 21.5 through 21.7. A greenscreen lighting setup and the resulting final composite. Since there are no tracking marks, we can assume that the camera was static for this shot. (Courtesy China Film Group)
Figure 21.20. Day-for-night on Mank. Shot in broad daylight and deliberately underexposed, big lights were needed to pick up the exposure on the faces. Since it would be unnatural for them to squint at night, the actors were fitted with ND filter contact lenses—eyeball sunglasses.
• Raise all connectors, especially distribution boxes, off the ground on apple boxes. Wrap them in plastic and seal with electrical tape, as gaffer tape won't withstand water.
• Ground everything you can.
• Put rain hats on all lights. Protect the lenses of all large lights; water on a hot lens will shatter it with the possibility of glass flying out.
• Cover equipment racks and other spare equipment with heavy plastic.
• Crew members should wear insulating shoes and stand on rubber mats whenever working with electrical equipment.
• Observe all electrical safety rules religiously.
Most rain conditions (which includes real rain as well as rain towers) call
for a camera umbrella, which is a large sturdy beach or patio-type umbrella
and perhaps an aluminized space blanket or rain cover for the camera.
Many cameras have purpose-built rain covers. Be sure that the filters are protected as well; rain drops on the filter or lens are very noticeable. For heavier water conditions, a deflector may be necessary. A rain deflector is a spinning round glass in front of the mirror. It rotates fast enough to spin the water off and keep the lens area clear. One caution: when used with a free-floating camera rig (Steadicam, etc.), the spinning glass acts as a gyro and tends to pull the camera off course. There are other devices that blow either compressed air or nitrogen toward a clear filter to keep water off.
LIGHTNING
Because lightning must be extremely powerful to be effective, it generally calls for a specially built rig. Machines from Lightning Strikes are widely used; they are basically incredibly powerful strobes. Included with them is a controller that can vary the timing and intensity of strikes to very accurately reproduce actual lightning. For further realism, several units should be used. Except when a storm is far away, lightning comes from several different angles. Another time-honored method is to use metal shutters on the lights. Shutter units are available for a wide variety of lights and they fit into the slots for the barn doors of the unit.
GUNSHOTS AND EXPLOSIONS
Figure 21.21. Rain can never be effective unless it is backlit, as in this shot from John Wick.
Gunshots are flashes of short enough duration that they might occur while the shutter is closed. When shooting film, the standard procedure is for the operator to watch for the flashes. If the operator sees them, then they did not get recorded on film. If the operator saw them, it means the flashes occurred while the mirror was reflecting the image to the viewfinder. Depending on how critical they are, another take may be necessary to make sure all the shots are recorded. Several things can be done to alleviate this problem. There are prop guns that do not use gunpowder but instead use an electrical pulse coupled with a chemical charge to produce a flash.
Guns should only be handled by a licensed pyrotechnician/armorer;
in most places this is a legal requirement. The same applies to bullet hits
(squibs) planted on people, props, or the set. Squibs are small black powder
charges and can be dangerous. If the gun is not firing in the shot, the armorer should open each gun, verify that it is empty, then show it to the actors and camera crew with the action open. If blanks are being fired anywhere near the camera, a shield should be set by the grips to
cover the camera, the operator, and the focus puller. This is usually done
with clear polycarbonate (Lexan is a popular type) that is optically smooth
enough to shoot through but strong enough to protect the people, the
lens, and the camera. This shield needs to be secured and bagged so that it
won’t get knocked over by an errant diving stunt person or a chair that gets
kicked in the action. The disadvantage of Lexan is that it scratches easily.
The same type of shield is necessary for small explosions, rockets, shat-
tering glass, and so on. For small explosions, the camera crew also need
to be protected from objects that get blown into the air. For larger explo-
sions, the camera should be either locked down and switched on remotely,
or operated with a remote-control head and video assist. In this case, either
very heavy duty crash boxes are needed or expendable cameras. Explosions are usually filmed with multiple cameras at various frame rates; at least one of the cameras will be run at a high frame rate—often up to 250 FPS or more.
TIME-LAPSE PHOTOGRAPHY
Time-lapse is usually done with an intervalometer—an electronic device that
controls the timing and duration of each exposure. The Norris Interval-
ometer starts at an exposure of 1/16 of a second and gets longer from
there. There are also computer and smartphone apps that will drive a con-
nected camera and function as an intervalometer. The interval between
exposures can be anywhere from a fraction of a second up to several hours
or even days apart.
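The arithmetic behind choosing an interval can be sketched simply (ignoring per-frame exposure time; the function names are illustrative):

```python
def interval_seconds(event_duration_s, screen_time_s, playback_fps=24):
    """Shooting interval for a time-lapse: how often to expose one frame
    so an event of the given real duration plays back in the desired
    screen time at the given frame rate."""
    frames_needed = screen_time_s * playback_fps
    return event_duration_s / frames_needed

# A 2-hour sunset compressed to 10 seconds of screen time at 24 FPS:
interval = interval_seconds(2 * 3600, 10)   # one frame every 30 seconds
```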
Figure 21.22. A Phantom Flex4K 3D rig. (Photo courtesy of LoveHighSpeed)
With longer exposure you get not only a time-lapse effect but may also get blurring of the subject. This can be strongly visual with subjects such as car lights or moving clouds or a rushing stream. One issue with time-lapse shots is that the exposure may change radically during the shot, especially if it is night-into-day or day-into-night, or if heavy clouds move in during the shot. This can be controlled with a timing device, or it may be necessary to stand by and do exposure changes manually. Also, with long intervals between exposures, it is possible for enough light to leak around the normal camera shutter to fog frames. An additional shutter, known as a capping shutter, is added to prevent this. The Steele Chart by Lance Steele Rieck (Table 21.2) shows screen time versus event duration for time-lapse.
TIME SLICING
This is the effect that was made famous in The Matrix, where a character is suddenly frozen but the camera dollies around the figure. This effect is accomplished with an array of still cameras arranged around the subject. A regular film or video camera can be part of the array. At the moment of freezing the action, the entire circle of cameras is fired. These stills are then scanned and blended together to form a film shot (Figure 21.4). Visualize it this way: imagine a camera on a mount that can be dollied around the subject instantaneously, let's say 90°, with the film running at very high speed. Since the dolly is instant, it "sees" the subject from all points around
that arc before the subject can move. This is what the still cameras do: they see the subject from as many points of view as you wish—all at the same time. In practice, the subject is often placed on greenscreen and then the green background is replaced with live-action footage of the original scene, usually matching the dolly action simulated by the still array. The result is a dolly around a live-action scene with a frozen subject in the middle.
Figure 21.23. Director/cameraman Ben Dolphin shoots a scene in the water with green screen. Obviously a watertight housing is necessary for scenes like this. (Photo courtesy Ben Dolphin)
TRANSFERRING FILM TO VIDEO
When transferring 24 FPS film to video at 29.97 FPS, there is a mismatch of speed that must be corrected. Interlaced video has 60 fields/second (2 fields per frame), so five fields take 5/60 second = 1/12 second, which is exactly the amount of time it takes for film to show two frames. The solution to the problem is that each frame of film is not transferred to a corresponding frame of video.
The first film frame is transferred to 3 video fields. Next, the second film frame is transferred to 2 video fields. The total time for the original film frames is 2/24 s = 1/12 s, which is exactly the same time it takes NTSC video to show the same frames (3/60 s + 2/60 s = 5/60 s = 1/12 s). This process alternates for successive frames. This is called 3-to-2 pulldown, usually written as 3:2 pulldown. The problem with this is that every other film frame is shown for 1/20 of a second, while the others are shown for 1/30 of a second. This makes pans look less smooth than they did in the movie theater. Film shot at 25 FPS does not require this process.
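The cadence described above can be sketched in a few lines (an illustration of the field pattern, not any particular telecine's implementation):

```python
def pulldown_32(film_frames):
    """3:2 pulldown: map 24 FPS film frames onto 60i video fields.
    Odd-numbered frames get 3 fields, even-numbered frames get 2, so
    four film frames (1/6 s at 24 FPS) fill ten fields (1/6 s of 60i)."""
    fields = []
    for i, frame in enumerate(film_frames):
        fields.extend([frame] * (3 if i % 2 == 0 else 2))
    return fields

fields = pulldown_32(["A", "B", "C", "D"])
# -> ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']
```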
FLICKER
As discussed in the chapter Lighting Sources, there are three basic kinds of light sources. One is a filament (usually tungsten) that is heated by electrical current until it glows. The other type is a discharge source. These include fluorescents, HMIs, Xenons, mercury vapor, sodium vapor, and others (LEDs are the third). In all of these, an arc is established between a cathode and an anode. This arc then excites gases or a plasma cloud, inducing them to glow. All discharge sources run on alternating current. Any arc-based bulb powered by alternating current has an output that rises and
falls as the waveform varies. Alternating current rises and falls as it heats a tungsten filament as well, but the filament stays hot enough that the light it emits does not vary a great deal. There is some loss of output, but it is minimal, usually only about 10 to 15%, not enough to affect exposure. With discharge sources, the light output does rise and fall significantly throughout the AC cycle.
Figure 21.24. This framing chart from Arri includes both frequently used film and video formats, such as 1.78:1 (16:9); the chart shows frame lines for 2.39 flat, 1.85 DCP, 1.78, and 2.39 2x anamorphic. (Courtesy Arri Group)
Rarely perceptible to the eye, flicker appears on the footage as an uneven
variation in exposure. This effect is a result of variations in exposure from
frame to frame as a result of a mismatch in the output wave form of the
light and the frame rate of the camera. Flicker can be bad enough to com-
pletely ruin the shot. AC power is a sine wave. When the current flow is
at the maximum or minimum the output of the light will be maximum.
When the sine wave crosses the axis, the current flow drops to zero and
the bulb produces less output. Since the light is “on” for both the positive
and negative side of the sine wave, it reaches its maximum at twice the rate
of the AC: 120 cycles per second for 60-hertz current and 100 cycles per
second for 50-hertz current. For an HMI with a magnetic ballast, the output
at the crossover point may be as low as 17% of total output.
With film there is another complication: the shutter is opening and clos-
ing at a rate that may be different than the rate at which the light output is
varying. When the relationship of the shutter and the light output varies
in relation to each other, each film frame is exposed to different amounts
of the cycle. The result is exposure that varies enough to be noticeable.
There are three possibilities when shooting film: the frame rate of the
camera can be unsteady, the frequency of the electrical supply can fluctu-
ate, or the frame rate of the shutter creates a mismatch in the synchroniza-
tion of the shutter and the light output. The first two are obvious: if either
the shutter rate or the light output are random, it is clear that there will be different amounts of exposure for each frame. The third is a bit more complex. Only certain combinations of shutter speed and power supply frequency can be considered acceptably safe. Deviations from these combinations always risk noticeable flicker. Four conditions are essential to prevent HMI or fluorescent flicker:
• Constant frequency in the AC power supply.
• Constant framing rate in the camera.
• Compatible shutter angle.
• Compatible frame rate.
Figure 21.25. Four 12K Blackmagic cameras rigged on a pickup to do background plates for CGI. (Photo courtesy Sam Nicholson, Stargate Studios)
The first two conditions are satisfied with either crystal controls on the generator and camera or by running one or both of them from the local AC mains, which are usually very reliable in frequency.
The shutter angle and frame rate are determined by consulting the appropriate charts. A fifth condition—relationship of AC frequency to shutter—is generally only crucial in high-speed cinematography and is usually not a factor in most filming situations.
At 24 FPS camera speed, if the power supply is stable, shutter angle can
vary from 90° to 200° with little risk. The ideal shutter angle is 144°, since
this results in an exposure time of 1/60th of a second and so it matches the
frequency of the mains power supply. In actual practice, there is little risk
in using a 180° shutter if the camera is crystal controlled and the power
supply is from the mains or a crystal-controlled generator.
With a 180° shutter opening, the camera is exposing 2-1/2 pulses per
frame (rather than the exactly 2 pulses per frame as you would get with
144°) and so exposure can theoretically vary by as much as 9%. In other
countries (especially in Europe), with a 50 cycle per second power supply,
and shooting at 24 FPS, the ideal shutter angle is 172.8°. Tables 21.3, 21.4,
and 21.5 list the acceptably safe frame rates for shooting at any shutter
angle (with either 50 or 60-hertz electrical supplies) and the safe frame
rates for specific shutter speeds.
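The shutter-angle arithmetic above can be checked with a short sketch:

```python
def exposure_time_s(shutter_angle_deg, fps):
    """Exposure time of a rotary shutter: the open fraction of the
    shutter disc times the frame period."""
    return (shutter_angle_deg / 360.0) / fps

def pulses_per_frame(shutter_angle_deg, fps, mains_hz):
    """Light pulses seen during one exposure. An arc source peaks
    twice per AC cycle, so it pulses at 2 x the mains frequency.
    A whole number means every frame samples the same slice of
    the cycle; a fraction risks frame-to-frame flicker."""
    return 2 * mains_hz * exposure_time_s(shutter_angle_deg, fps)

ideal = pulses_per_frame(144, 24, 60)     # 144 deg, 60 Hz: exactly 2.0
risky = pulses_per_frame(180, 24, 60)     # 180 deg, 60 Hz: 2.5
europe = pulses_per_frame(172.8, 24, 50)  # 172.8 deg, 50 Hz: exactly 2.0
```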
Table 21.2. The Steele Chart for calculating time-lapse shots. (Courtesy of Lance Steele Rieck)
A simple way to think about it is to divide 120 by a whole number—for example, 120/4 = 30, 120/8 = 15. For 50 hertz (Hz) power systems, divide 100 by a whole number. This results in a simplified series as shown in Table 21.3. Any variation in the frequency of the power supply will result in an exposure fluctuation of approximately .4 f/stop.
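The divide-down rule reduces to one line of arithmetic; a sketch (the full tables extend the divisor range much further):

```python
def safe_frame_rates(mains_hz, max_divisor=10):
    """Flicker-safe camera speeds: arc lights pulse at twice the mains
    frequency (120 per second on 60 Hz power, 100 on 50 Hz), so any
    frame rate that divides evenly into that pulse rate is safe."""
    pulse_rate = 2 * mains_hz
    return [pulse_rate / n for n in range(1, max_divisor + 1)]

rates_60 = safe_frame_rates(60)   # 120, 60, 40, 30, 24, 20, ...
rates_50 = safe_frame_rates(50)   # 100, 50, 33.3, 25, 20, ...
```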
Any generator used must be crystal controlled. A frequency meter should be used to monitor the generator. For most purposes, plus or minus one-quarter of a cycle is considered acceptable. Flicker-free ballasts are available that minimize the possibility of flicker even under high-speed conditions. They utilize two basic principles: square-wave output and high frequency. Flicker-free ballasts modify the wave form of the power supply by squaring it so that instead of the normal rounded sine wave, the output is angular. This means that the rising/falling sections of the wave are a much smaller portion of the total. As a result, the light output is off for less time.
Flicker-free ballasts also use increased frequency. The idea is that with 200 or 250 cycles per second it is less likely that there will be a mismatch from frame to frame. Since there is an increase in the noise in flicker-free mode, some flicker-free units can be switched from normal to flicker-free operation. With high-speed shooting, flicker can also sometimes be a problem with small tungsten bulbs, especially if they are on camera, because the smaller filaments don't have the mass to stay heated through the cycle as do larger bulbs.
Table 21.3. Simplified safe frame rates with 60 Hz power supply and 50 Hz power supply:
60 Hz: 120, 60, 40, 30, 24, 20, 15, 12, 10, 8, 6, 5, 4, 2, 1
50 Hz: 100, 50, 25, 20, 10, 5, 4, 2, 1
Table 21.4. Safe frame rates for any shutter speed with 60 Hz power supply:
1.000, 1.500, 1.875, 2.000, 2.500, 3.000, 3.750, 4.000, 4.800, 5.000, 5.217, 5.454, 5.714, 6.000, 6.315, 6.666, 7.058, 7.500, 8.000, 8.571, 9.231, 10.000, 10.909, 12.000, 13.333, 15.000, 17.143, 20.000, 24.000, 30.000, 40.000, 60.000, 120.00
Table 21.5. Safe frame rates for any shutter speed with 50 Hz power supply:
1.000, 1.250, 2.000, 2.500, 3.125, 3.333, 4.000, 4.166, 4.347, 4.545, 4.761, 5.000, 5.263, 5.882, 6.250, 6.666, 7.142, 7.692, 8.333, 9.090, 10.00, 11.111, 12.500, 14.285, 16.666, 20.000, 25.000, 33.333, 50.000, 100.00
SHOOTING VIRTUAL REALITY
Virtual reality (VR) shooting is usually done with multiple cameras radiating from the same central point. This allows post to knit the many views
together into one seamless image. A typical VR rig is shown in Figures
21.26 and 21.28.
In most situations, 360° cameras are used to film VR scenes. Of course, no one camera can shoot 360°, so virtual reality rigs are made of four or more cameras. One challenge in shooting with these cameras is that, except for the most expensive ones, you are not able to view what is being shot while the camera is up. One form of insurance is to shoot lots of footage. Yes, VR files can be very large since they come from multiple cameras, but as we have discussed before, reshoots are expensive and often not possible at all, so play it safe and shoot even more than you think you need.
As with all scenes, review your footage before you leave the set or loca-
tion. Have your DIT or data manager scrub through the footage to check
for any problems. The director and DP should review the scenes to make
sure they have accomplished what they were aiming for. Try to view the
footage on the best monitor possible as small, lower resolution monitors
may hide many problems that will be glaringly obvious in the screening
room. For the same reason, try to shoot as much test footage as you can so
your camera crew is confident in all aspects of operation.
Since you are shooting in all directions and usually a pretty tall shot ver-
tically, the chances to “frame” a shot are minimal. This is limiting in terms
of visual storytelling, so you'll have to rely on other means. Careful selection of locations is a key element. Also, except in rare cases, only the actors can be on the set, as everyone will be in view. When the gaffer and key grip are scouting locations, they need to look for hiding spots that are not too far from the set.
Lighting VR is also a challenge. Many VR cameras are not well suited to
low light conditions. Also, as with people on the set, any lights on stands on the floor will be visible, as will any grip equipment. This limits your lighting choices, and this will be critical in choosing locations. Shooting on
a set is less problematic as you can set lights in the overhead grid or bring them over the top of the set walls. You'll need to take the field of view of the VR camera into account when designing your set. Wall flats are generally 10' tall, in some cases only 8', and for some shots this might not be enough.
Since the VR experience is greatly enhanced by camera movement, mounting the camera on a stationary tripod will not often be satisfactory. Of course, a dolly might intrude into the shot and certainly a dolly grip will be visible, so this is not usually an option. Mounting VR cameras on drones is a popular method for exactly these reasons. A head-mounted rig such as in Figures 21.26 and 21.27 is an excellent option, especially for first person POV shots. VR footage can also be stereoscopic, in which case you will be handling some even larger files.
Figure 21.26. (top) Head mounted VR rig on the film Agent Emerson. (Photo by Billy Bennight)
Figure 21.27. (right) The greenscreen set lit by spacelights. Note the tracking marks. (Photo by Billy Bennight)
Figure 21.28. (above) This virtual reality rig by Radiant Images employs 16 cameras to capture every angle.
476 cinematography: theory and practice
DEALING WITH AUDIO
You may be thinking, "this is a book about cinematography, why talk
about audio?" A few years ago this would have been a legitimate question.
When shooting film, there would usually be only one interaction between
the DP and the audio team: "Morning. How 'bout those Dodgers, huh?"
When shooting digital, there is much closer teamwork between the two
crews. First, the audio might be recorded onto the camera rather than a
separate device; and second, on smaller productions there might not be an
audio person on the set at all—in which case, it's up to the camera people
to use microphones and lavs to record the audio.
DOUBLE SYSTEM VS. SINGLE SYSTEM SOUND
Most digital cameras can record audio; however, only the higher-end cam-
eras will have professional audio inputs, which are XLR plugs in most
cases (Figure 21.30). We'll talk about connectors in a bit, but for now it's
important to know that XLR inputs are usually essential to getting the
best audio. If you are recording directly into the camera, this is called single
system sound.
All professional audio is recorded as double system sound, which means
there is a separate recorder. Frequently, we record the audio onto a sepa-
rate recorder but also feed the signal to the camera; this makes it much
easier to synchronize the audio to video in post-production. If recording
into the camera is your only option, then make sure you do the best you
can with microphone choice, microphone placement, and keeping the set
quiet. Whatever setup you use, test it before your shoot day!
Figure 21.29. (top) Three types of microphones. From top, a shotgun mic, a short shotgun mic, and a dynamic omnidirectional microphone.
[Audio connection diagram: microphone inputs feed the Mixer; mixed audio out to the Recorder; outputs feed headphones for the boom operator and the Sound Mixer, plus wireless headphones for the Director.]
will need AC power for your speakers. If it's a day exterior shoot, the gaffer
may have been planning on using only reflectors to light the set and
not intending to provide AC power on the set with a generator or other
means, such as asking a local shop if they can plug in a line to run out the
door to your set.
SYNCING
If you are using double system sound (where the audio is recorded on a
device other than the camera), you have to sync (synchronize) the audio
and video. Since the invention of "talkies" in 1929, this has been done with a
clapper slate (Figure 21.35). On film and video, it's easy to see the exact
frame where the sticks come together, and the distinct clap sound they make is
clear on the audio. It is then usually up to the assistant editor to
go through each and every take, find the frame where the sticks come to-
gether, find the distinctive "clap" on the audio, and match them up; it is a
laborious and slow process. Plural Eyes is a popular software application that does a
good job of syncing audio and video automatically. Final Cut Pro X, DaVinci Resolve, and
Premiere Pro also have syncing capabilities; however, all of these applica-
tions have some limitations, especially in more complex cases.
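At heart, waveform-based syncing software slides one audio track against the other until they line up best. Here is a toy Python sketch of that idea, using brute-force cross-correlation on tiny sample lists; real applications do this on millions of samples with far more efficient algorithms:

```python
def best_offset(camera_audio, recorder_audio):
    """Return the sample offset that best aligns the recorder track with
    the camera's scratch track, by brute-force cross-correlation.
    A positive result means the recorder track starts that many samples
    later on the camera's timeline."""
    best, best_score = 0, float("-inf")
    n = len(camera_audio)
    for offset in range(-(len(recorder_audio) - 1), n):
        score = 0.0
        for i, sample in enumerate(recorder_audio):
            j = i + offset
            if 0 <= j < n:
                score += sample * camera_audio[j]
        if score > best_score:
            best, best_score = offset, score
    return best

# A "clap" spike at sample 5 of the camera track, sample 2 of the recorder:
cam = [0, 0, 0, 0, 0, 9, 1, 0, 0, 0]
rec = [0, 0, 9, 1, 0, 0]
print(best_offset(cam, rec))  # 3 -- shift the recorder 3 samples later
```

The same alignment is why you always record a scratch track on the camera: it gives the software something to correlate against.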
TIMECODE SLATE
A big improvement came with the timecode slate. So what is timecode? You
can see it in Figure 21.36—it's hours:minutes:seconds:frames. The beauty of
it is that every frame of your video and audio has a unique
identifier. This has tremendous utility in post-production; for one thing, it
makes syncing audio and video practically automatic. It also helps editors,
sound editors, composers, and special effects creators keep track of every-
thing throughout the post process.
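That "unique identifier" property is easy to see if you treat timecode as simple arithmetic on hours, minutes, seconds, and frames. A short Python sketch (non-drop-frame only; drop-frame timecode at 29.97 fps has extra counting rules this ignores):

```python
def tc_to_frames(tc, fps=24):
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=24):
    """Convert an absolute frame count back to HH:MM:SS:FF."""
    ss, ff = divmod(frames, fps)
    mm, ss = divmod(ss, 60)
    hh, mm = divmod(mm, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}"

print(tc_to_frames("01:00:00:12"))  # 86412 -- frame number at 24 fps
print(frames_to_tc(86412))          # 01:00:00:12
```

Because every frame maps to exactly one number and back, matching audio and video becomes a lookup rather than a hunt for the clap.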
Of course, timecode doesn't come from nowhere—it has to be gener-
ated. Professional audio recorders are capable of creating timecode, but
they are very expensive. Separate timecode generators were also very expen-
sive—until recently. Affordable and easy-to-use timecode generators are
now available, notably from Tentacle Sync. Small, lightweight, and easy to
use, they make timecode available to indie filmmakers. Let's take a look at
using timecode on the set, using the Tentacle Sync E as an example (Figures
21.39 and 21.40).
Figure 21.39. (above) A Tentacle Sync E jamming timecode to the Timecode In BNC connector on a Zoom F4 recorder. After timecode is jammed, the Tentacle can be disconnected and used on a camera. For recorders and cameras that don't have a dedicated timecode in connector, you can record timecode to one of the audio tracks, as in Figure 21.37.
Figure 21.40. (left) A Tascam DR60D mounted under a DSLR. A wireless lavalier receiver is plugged in to channel 2, and a Tentacle Sync E is velcroed to the top of the camera and feeds timecode into channel 3 of the Tascam.
First of all, you need to set the unit to the timecode you want to use.
There are two ways of using timecode: free run and record run. Free run
timecode is running all the time, even when the camera is not rolling.
Most of the time, we use it as time-of-day timecode—meaning it matches
the actual clock time of where you are shooting. Although it's not neces-
sary, having the timecode match the time of day helps identify shots and
makes things easier to keep track of. In the case of the Tentacle, setting
the time is done over Bluetooth through the iPhone, iPad, or Android
app. Since smartphones and tablets get their time from the internet, it is
extremely accurate. The app also indicates the battery level of each device
and whether or not it is connected.
Once all of your Tentacles are synced to the same timecode, it's time to
jam it to the audio recorder and all of the cameras (Figure 21.39). Jam-
ming just means transferring the timecode from the generator to the audio
device or camera. Generally, this is handled by the audio department, as
timecode originates either from their recorder or from a timecode gen-
erator that they supervise. Jamming is always done at the beginning of
the shoot day, and then again after lunch. The reason for this is that even
high-end pro equipment can drift over time, throwing sync off by
a few frames or even more. Each time, the audio person will jam sync
to the timecode slate (if there is one) and each of the cameras; the au-
dio recorder is the origin, so it doesn't need it. With pro equipment, this
would involve connecting and disconnecting cables for each device. Since
Tentacle works with Bluetooth, the process is much simplified. The devices
do have to be fairly close together, but since they are so small and
light, this is not a problem.
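How fast does drift add up? Clock error is usually quoted in parts per million (ppm), and even a small error accumulates a visible offset by the end of the day, which is why re-jamming after lunch is standard practice. A quick Python sketch (the 1 ppm figure is a hypothetical example, not a spec for any device mentioned here):

```python
def drift_frames(ppm_error, hours_since_jam, fps=24):
    """Frames of sync drift accumulated by a clock running fast or slow
    by ppm_error parts per million, hours_since_jam hours after jamming."""
    seconds_elapsed = hours_since_jam * 3600
    drift_seconds = seconds_elapsed * ppm_error / 1_000_000
    return drift_seconds * fps

# A hypothetical 1 ppm clock, six hours after the morning jam:
print(round(drift_frames(1, 6), 2))  # 0.52 -- half a frame of drift already
```

Double the ppm or the hours and the drift doubles with it, so a full day without re-jamming can easily push less accurate clocks past a frame of error.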
Pro equipment has dedicated timecode inputs; however, even cameras
that cost up to seven or eight thousand dollars might not have them. Of
course, DSLRs and other prosumer cameras that might be in use on indie
or student productions will certainly not have them. So what do we do? In
this case, the Tentacle comes with a short cable that has a 3.5mm TRS con-
nector at each end. DSLRs and many audio recorders have 3.5mm micro-
phone inputs. Connecting the generator to the mic input sends timecode
as an audio signal to one of the channels; you will see it as a constant level
on the audio display, and you will also hear it as a high-pitched noise on that
channel. Does this mean you have lost both audio channels on the camera?
No; the Tentacle has a built-in microphone that transmits location sound
to the other channel. Is it great audio? No, but it doesn't need to be—it's
what we call a scratch track, which is audio recorded to be used for sync or
other purposes later on. You should always record a scratch track, no matter
what your setup.
dedication
To my wife and inspiration—Ada Pullini Brown.
the author
Blain Brown is a cinematographer, writer, and director based in Los Angeles. He has been the director of photography, direc-
tor, producer, and screenwriter on features, commercials, documentaries, and music videos on all types of film and digital
formats. He has taught at several film schools in the Los Angeles area.
acknowledgments
Adam Wilt
Airstar Lighting, Inc.
Ammar Quteineh
Arri Group
Art Adams
Backstage Equipment, Inc.
Barry Bassett VMI, vmi.tv
Bill Burke
Birns and Sawyer, Inc.
Brady Lewis
CAME-TV, came-tv.com
Canon, USA
Century Precision Optics, Inc.
Chapman/Leonard Studio Equipment
Chimera Lighting
Cinefade, Cinefade.com
Color Grading Central, colorgradingcentral.com
Datacolor
Dave Corley at DSC Labs, DSCLabs.com
Don Lamasone
Eastman Kodak, Inc.
Flying Cam, Inc.
FotoKem Film and Video
Fuji Film, Ltd.
Geoff Boyle and everyone at CML
Greg Cotten, lattice.videovillage.co
Guy Holt, ScreenLightAndGrip.com
Ira Tiffen and The Tiffen Company
J.L. Fisher, Inc.
James Mathers, Digital Cinema Society
Jon Fauer, Film and Digital Times
Keslow Camera
Kino Flo, Inc.
Lance Steele Rieck
Larry Engle
Larry Mole Parker, Mole-Richardson
Lee Filters
Lightning Strikes, Inc.
Mark Weingartner
Matthews Studio Equipment Corp.
McIntire, Inc.
Michael Gallart
Murray J. Cox, Visual Consultant
Panasonic, Inc.
Panavision, Inc.
PAWS, Inc.
Photo-Sonics, Inc.
Red Digital Cinema
Rosco Laboratories, Inc.
Schneider Optical, Inc.
Sony Corp.
Stargate Studios, stargatestudios.net
Steve Shaw at Light Illusion, lightillusion.com
Stu McOmie
Sunray, Inc.
Tektronix, Inc.
Tony Nako
X-Rite Photo and Video
bibliography
Adams, Ansel. The Negative. Little, Brown & Co. 1983
Arnheim, Rudolf. Art and Visual Perception. University of California Press. 1954
Film as Art. University of California Press. 1957
Visual Thinking. University of California Press. 1969
The Power of the Center. University of California Press. 1982
ASC, Rod Ryan. American Cinematographer Manual. ASC Press. 2000
Barclay, Steven. The Motion Picture Image: From Film to Digital. Focal Press. 2000
Bellantoni, Patti. If It’s Purple, Someone’s Gonna Die: The Power of Color in Visual Storytelling. Focal Press. 2005
Bordwell, David and Kristin Thompson. Film Art: An Introduction. McGraw-Hill. 1997
Brown, Ada Pullini. Basic Color Theory. Unpublished ms. 2000
Brown, Blain. Filmmaker’s Pocket Reference. Focal Press. 1995
Motion Picture and Video Lighting. Focal Press. 3rd Edition, 2018
The Filmmakers’s Guide to Digital Imaging. Focal Press. 2014
The Basics of Filmmaking, Focal Press, 2021
Campbell, Russell. Photographic Theory for the Motion Picture Cameraman. A.S. Barnes & Co. 1974
Practical Motion Picture Photography. A.S. Barnes & Co. 1979
Carlson, Verne and Sylvia. Professional Lighting Handbook. Focal Press. 1985
Case, Dominic. Motion Picture Film Processing. Focal Press. 1990
Film Technology In Post Production. Focal Press. 1997
Cook, David. A History of Narrative Film. W.W. Norton & Co. 1982
Davis, Phil. Beyond The Zone System. Van Nostrand Reinhold Co. 1981
Dmytryk, Edward. On Screen Directing. Focal Press. 1984
Cinema: Concept and Practice. Focal Press. 1998
Eastman Kodak. Kodak Filters for Scientific and Technical Uses (B-3). Eastman Kodak Co. 1981
Professional Motion Picture Films (H-1). Eastman Kodak Co. 1982
Ettedgui, Peter. Cinematography: Screencraft. Focal Press. 2000
Fauer, John. The Arri 35 Book. Arriflex Corp. 1989
Feldman, Edmund Burke. Thinking About Art. Prentice Hall. 1996
Fielding, Raymond. Special Effects Cinematography. 4th Edition. Focal Press. 1990
G.E. Lighting. Stage and Studio Lamp Catalog. General Electric. 1989
Grob, Bernard. Basic Television and Video Systems. McGraw-Hill. 1984
Happe, L. Bernard. Your Film and the Lab. Focal Press. 1989
Harrison, H.K. The Mystery of Filters. Harrison and Harrison. 1981
Harwig, Robert. Basic TV Technology. Focal Press. 1990
Hershey, Fritz Lynn. Optics and Focus For Camera Assistants. Focal Press. 1996
Higham, Charles. Hollywood Cameramen: Sources of Light. Garland Publishing. 1986
Hirschfeld, Gerald. Image Control. Focal Press. 1993
Hyypia, Jorma. The Complete Tifen Filter Manual. Amphoto. 1981
Jacobs, Lewis. The Emergence of Film Art. Hopkinson and Blake. 1969
Janson, H.W. The History of Art. 6th Edition. Harry Abrams. 2001
Jones, et. al. Film Into Video. Focal Press. 2000
Kawin, Bruce. Mindscreen: Bergman, Godard and First Person Film. Princeton University Press. 1978
Maltin, Leonard. The Art of The Cinematographer. Dover Publications. 1978
Mascelli, Joseph. The Five C’s Of Cinematography. Cine/Grafc Publications. 1956
McClain, Jerry. The Influence of Stage Lighting on Early Cinema. International Photographer. 1986
Millerson, Gerald. Lighting for Television and Motion Pictures. Focal Press. 1983
Nelson, Thomas. Kubrick: Inside A Film Artist’s Maze. Indiana University Press. 1982
Perisic, Zoran. Visual Effects Cinematography. Focal Press. 2000
Rabiger, Michael. Directing – Film Techniques and Aesthetics. 2nd Edition. Focal Press. 1997
Ray, Sidney. The Lens in Action. Focal Press. 1976
Applied Photographic Optics. Focal Press. 1988
Reisz, Karel and Gavin Millar. The Technique of Film Editing. 2nd Edition. Focal Press. 1983
Rogers, Pauline. More Contemporary Cinematographers on Their Art. Focal Press. 2000
Samuelson, David. Motion Picture Camera Data. Focal Press. 1979
Sharff, Stefan. The Elements of Cinema. Columbia University Press. 1982
Shipman, David. The Story of Cinema. St. Martin’s Press. 1984
St. John Marner, Terence. Directing Motion Pictures. A.S. Barnes. 1972
Sterling, Anna Kate. Cinematographers on the Art and Craft of Cinematography. Scarecrow Press. 1987
Stroebel, Leslie. Photographic Filters. Morgan & Morgan. 1974
Sylvania. Lighting Handbook. 8th Edition. GTE Products. 1989
Thompson, Roy. Grammar of the Shot. Focal Press. 1998
Truffaut, François. Hitchcock/Truffaut. Simon and Schuster. 1983
Walker, Alexander. Stanley Kubrick Directs. Harcourt Brace. 1969
Wilson, Anton. Cinema Workshop. ASC. 1983
index
Page numbers in italics refer to figures. Page numbers in bold refer to tables.
AbelCine resolution test chart 136
Academy Color Encoding System (ACES) 167, 402, 427, 430–431, 432; color space 197, 199, 432; terminology 432; workflow 430, 431
Academy of Motion Picture Arts and Sciences (AMPAS) 197, 199, 430–431
.75ND filter 235
1D LUT 228, 230
1K Fresnel lights (babies) 252, 252, 298
1.2K HMIs 249
1.8K HMI PAR 249
2K Fresnel lights (juniors) 252, 252, 291
2K receivers 336, 339, 340
3D LUT 228, 229, 230–231
3-to-2 connector 436, 444
3-to-2 pulldown 471
4K video 98, 100
5K Fresnel lights 252
6K HMIs 248–249, 334
8K HMIs 248–249
8K UHD 98
9 Lite FAY 292, 293
9½ Weeks 43, 73, 278
10K Fresnel lights 252
12/3 446
12K HMIs 248, 331, 442
18% gray 130, 132, 134, 149, 185–186
18K HMIs 246, 248
20% rule 59–60, 59
20K Fresnel lights 249, 252
30° rule 59–60, 59
42 346
80 series cooling filters 232, 233, 238
81 series warming filters 232, 238
84 Charlie Mopic 79, 79
85 series warming filters 232, 233, 238
90% gray card 130
180° line see action axis
360° cameras 475
1917 94, 94, 254
2001: A Space Odyssey 93, 93
accents, in lighting 279
ACES Proxy/Log ACES 432
action axis 51, 51, 52
action cut 91–92
Adams, Ansel 130, 147, 155
Adams, Art 115, 151, 384, 413, 417, 459; on dynamic range 174–175; on exposing to the right 159; on light meters 157, 158; on LUTs 158; on matrix controls 198; on middle gray 159; on native ISO 114; One-Shot 133–134; on RAW 100; on S-curve 168; on sensor speed 140; on set surroundings 408, 411; on S-Log 179; on zebras 160
Adobe 103
aerial perspective see atmospheric perspective
aerial shots 362
Agent Emerson 476
Aguado, Ken 8
Alberti, Maryse 385
Alcott, John 236
ALE (Avid Log Exchange) files 428
aliasing 106
alligator clamps 444
Almendros, Nestor 289
Alonzo, John 208
alpha channel 105, 117
alternating current (AC) 434, 471–472
ambient light 269, 274, 276, 279
American Gangster 71
American shot see cowboy shots
American Society of Cinematographers see ASC Color Decision List (CDL) system
ampacity 443, 444
amplitude (strength) 120
amprobe 447
amps/amperes 434, 435, 442
analog dimming 460
analog-to-digital converter (ADC) 96, 97
Anderson clamps 444
angle of view 366–367
answering shots 56, 57, 58, 59, 80, 82
anti-aliasing filters see Optical Low Pass Filters (OLPF)
aperture 138–139, 141
aperture ring 368
Apocalypse Now 9, 73, 207
apparent focus 368, 372
appleboxes 340, 342
ARIB/SMPTE color bars 123, 125, 126, 130
armorers 469
Arnold, Jillian 421
Arri 103, 172; Amira 347, 351; Arri 709 187; ArriMax 250; ArriRAW 103; framing chart 472; Log C 182–183, 188; Look File 172; M18 249, 251; skypanels 244, 340
Arri Alexa 96, 98, 159, 183, 426; false colors 153, 153, 155; output options 172, 172
ASC Color Decision List (CDL) system 200, 229, 232, 426–429, 427, 431; primary color correction 428; SOP and S 428–429
aspect ratio 33–34, 35–36
Assimilate Scratch 156, 225, 225
Assistant Chief Lighting Technician (second electric) 389
atmosphere inserts 75
atmospheric perspective 13, 22–23
audience involvement 77–80
audio 477; Automatic Dialog Replacement 480; clean dialog, need for 477–478; microphones 478–479; overlapping dialog 478; production, rules 479; recording, typical connections for 481; scratch track 480, 482; shooting to playback 480–481; single system vs. double system sound 477; and video, syncing 481; XLR inputs and cables 479
Automatic Dialog Replacement (ADR) 480
autotransformers 305, 460
available light 270, 282, 284–285
babies (1K Fresnel lights) 252, 252, 298
Baby Baby 252, 252
baby juniors (Fresnel lights) 252
baby plates 335–336, 341
back cross keys 278, 282, 285, 291
back focus 380
background 39, 39, 41, 46, 285
background plate 456, 462
backlight 266, 267, 269, 274; for rain effects 467, 469; sun as 289
back porch 358
baked in 97, 99, 194
balance principle 20
ballasts: flicker-free 247, 474; HMI 247–248; Kino Flos 256
ball head 353, 355
balloon lights 258–259, 258, 305
banded feeder cables 440, 446
banding 173, 176
bar clamps 335
Barfly 256
Barger Baglights 255
barndoors 301, 301, 304, 323
Barry Lyndon 67, 67, 88, 90
Barton Fink 75
Bates connectors 439, 445, 446
Batman Begins 211
batteries 399
Bayer, Bryce 108
Bayer filter 103, 108, 108, 109, 110, 111
beadboard 300, 303, 321
beauty, and long lenses 42
Bellantoni, Patti 276
bellows 378–379
Bergman, Ingmar 25
best boy grip 400
Better File Renamer, A 425
Betweenie Fresnel light 253, 280, 286
Big Combo, The 18, 20
Big Eye tenner 252
big head close-up see tight close-up
Big Lebowski, The 69
Big Sleep, The 275
binning 245
Birdman 264, 382
bit depth 117
bit rate 106, 117
bits-per-channel 117
bits-per-pixel 106
bits per stop 174
bits total 117
black-and-white filters 239–240
black balance 129, 129, 194
black body locus 193, 196
Black Diffusion FX filter 236
Black Frost filters 234
Blackmagic: cameras 103, 473; Design 135, 161–162; URSA Mini Pro 12K camera 111
Black Panther 24, 26
Black Pro-Mist® filter 236, 242
brightness 120, 121, 122, 125, 143–144
brightness range see dynamic range
broad lights 256–257
Brown, Garrett 344, 363
Brunelleschi, Filippo 22
bucks 360
bullet 321
bulls-eye bubble level 32
bullswitches 435, 445
Burn After Reading 206
busbar lugs 444
butterflies 327–328
expose to the right (ETTR) 159–160, 162, 175
exposure: Arri Alexa false color 153; balance within frame 144; Blackmagic Design advice on 161–162; compensation, in macrophotography 377–379; controlling 138–139; digital 146; elements of 140–141; false color exposure display 152–153; goal posts 150–151, 152; good vs. bad 138; histograms 150, 151; indicators in camera 149–153; and ISO 140; and lighting 268; light meters 157–158; and magnification 467; monitors 160–162; RAW video 144–145; response curve 142–144, 142; strategies 154–157; strobe 462–463; theory 138–140; tools of 146–148; traffic lights 150, 151–152, 151; types of 144–146; waveform monitor 148–149, 157, 158–160; zebras 149–150, 150, 160, 161
Exposure Index (EI) 140, 183
extended range 172
extension plate 357
extensions 446
extension tubes 378–379
external recorders 426
external sync 121
extreme close-ups (ECU) 71, 72; lighting for 464; tools 377
eyeball sunglasses 468
eyelight 294, 298
film: cameras 112, 114; look, vs. video look 118; vs. theater 66–67
film chain 223
film gamma 169, 169
film negative 101
Film Rec 173, 175
film-to-video transfer 471
filter factor 240, 379
filters 200; Bayer filter 103, 108, 108, 109, 110, 111; color compensating filters 201, 238; contrast filters 236; conversion filters 236; corals 239; day-for-night 467; diffusion filters 233–236, 234, 236, 237–238, 239, 242; effects filters 235, 236; grads 200, 236; hot mirror 115–117; for industrial sources 201, 204; IR 115–117, 117, 241–242; light balancing filters 236; neutral density 46, 116–117, 117, 235, 236, 242, 303–304, 308, 468; Optical Low Pass Filters 106; sunset filters 235, 236; warming and cooling 238–239
filter tags 398
Final Cut Pro X 481
Fincher, David 111
fire effects 310–311, 464–465
First AC 390, 392–393
first-person storytelling 77
Fisher dolly 356, 358
flags 301, 323
flame bars 465
flange focal depth 380
flare 45, 45
flat front lighting 165, 166, 265, 285; avoiding 274; and depth 267, 271; and shape 266
flat space 21, 42
Fleabag 79
flesh tone accuracy 135
FLEx files 428
flicker 471–475
flicker boxes 296, 311, 407, 465, 466
flicker-free ballasts 247, 474
floating rigs 336
fluid heads 347, 353, 355
fluorescent lights 199; color-correct fluorescents 255–256, 257; gels for correcting 202–203
foamcore 260, 261, 272, 292, 300, 326
focal length 5–6, 7, 39, 40, 42, 366–367, 373
focus 367–368; apparent 368, 372; back focus 380; bokeh 378; circle of confusion 368, 370; critical 368, 372, 392; depth-of-focus 371; determining 392; hyperfocal distance 372–373, 374; light 385; mental focus 368, 370; rack focus 44–45, 45, 46, 374–375; selective 2, 44–46; Siemens Star 136; see also optics
focus marks 393
focus puller see First AC
foley 480
foot-candles 140
footroom 172, 176, 180
foreground 39, 39, 41, 285
foreground plate 456
found footage 79
four ways 453
frame 18–19, 67; aspect ratio 33–34, 35–36; balance of exposure within 144; composition rules 28–32; defining 4–5; entering and exiting 61, 62; forces of visual organization 23–28; frame rates 117–118, 141, 473, 475; height of 98; neutral axis shot for exiting 62; principles of composition 19–23; static 67–68, 67, 68; width of 98
frames per second (FPS) see frame rates
frame transfer CCD 107–108
frame-within-a-frame 5, 6, 20, 26, 30
framing charts 472
framing shots 68, 69
Frazier lens 379
Freedom! 214
freeform method 84, 85
freeform pass 84, 85
free run timecode 482
frequency meter 474
Fresnel, Augustin-Jean 244
Fresnel lights 244, 249, 252–253, 252, 253, 275, 277
frontal nodal point (FNP) 374
front porch 358
Frost filters 318
f/stops 141, 148, 174–175, 367, 372–373
full shots 71, 71
full swing 172
full well capacity 166
function shots 68, 69
gaffers 389, 441, 480–481
gaffer's glass 385, 385
gain (control) 224, 225
Gallart, Michael 252
Galt, John 184
gamma 169; Cinegamma 173; control 224, 225; correction 132, 170, 170; curves 178; encoding 173; film gamma 169, 169; Hypergamma 173–174, 175; midtones 224, 225; in RAW video 174; video gamma 169–170, 169
gamut 196, 197
gangboxes 445, 452
garage door light 288, 288
Garfield mount 363
geared heads 347, 353, 353
gel(s) 200, 200, 284–285, 324; color correction 202–203; conversion 200; families 203; light balancing 201–202; party 200
generators 436–437; blimped 436, 437; cable crossings 437, 439; large, operation of 437; paralleling putt-putt generators 439, 442, 443; small, operation of 437–439; types of 437
genlocking 121
geography, establishing 87–88, 88
ghost load 248, 437, 460
Gilliam, Terry 33
Girl With a Pearl Earring 273, 273
Gladiator 13, 44, 211
Glimmerglass® filter 239
global shutter 112
goal posts 150–151, 152, 325
grip truck 328, 395
Gross, Mitch 101
ground fault circuit interrupters (GFCIs) 451, 452, 453–454, 454
grounding safety 451, 453
Siemens Star 136
signal path 97; digital 96–99; Digital Signal Processor 96; HD recording 96–97; RAW 99–100; ultra high-def 97–99
signal-to-noise ratio (SNR) 115
silicon-controlled rectifiers (SCRs) 460
silks 288, 301, 301, 303, 305, 323, 327–328, 333, 334
Silverstack 425, 426
Singh, Tarsem 215
single (close-up) 71
single extensions 446
single net 301, 305, 324
single-phase electrical system 434, 435, 435, 449
single system sound 477
sinuous line 6, 24, 24
skin tone line 131
skin tones 126, 135–136
Skyfall 66, 77, 265
sky light 285
skypanels 244, 340
skypans 252
slate (clapper) 393, 480, 481
slating technique 410, 412–414; blurred slates 417; changing letter on slate 415; European system 416, 416; insert slates 415, 418; jamming the slate 412, 482, 482; MOS slating 411, 412, 414; multiple camera slating 411, 414; pickups 416; reshoots 417; second sticks 414; second unit 417; series 417; tail slate 411, 413; takes 415; timecode slates 412, 414, 414, 481–482; verbal slating 410; writing content 415
sliders 363
Slocombe, Douglas 77
S-Log 179–182, 180, 182, 183, 186, 188
Slovis, Michael 208
slow disclosure 88, 89–90
slow motion 43
SMPTE color bars 120, 122, 123, 124–125
snake bites 445, 452
Snatch 213
snoot boxes 257, 283, 465
snoots 302
snot tape see transfer tape
Socapex box 438
sodium vapor lights 201, 203, 204
softboxes 302, 305
soft cookies 275, 307
soft edge grads 236
Soft FX® 1/2 filter 240
soft light 255, 257, 264, 269, 292, 300–301, 302; contained 273, 273; eggcrates 300; overhead 284, 294; and shadows 269; working with 270, 272–273
SoftSun 100 256
SoftSun lights 256–257, 256
solids 301
Songs from the Second Floor 68, 68
Sony 103, 105, 150, 426; F3 198, 199; hypergamma 174, 175; Rec.709 188; S-Gamut 181–182; S-Log 179–182, 180, 182, 183, 186, 188
Sorcerer's Apprentice, The 460
sound mixer 410
Source Fours 258
Southon, Mike 214
space: compression of 38, 42–43, 370; flat 21, 42; and lens 5–6, 38, 40–42, 42; negative 26, 26, 30; perspective 6, 9; positive 26
spacelights 257, 259, 338
space movies 387
sparks 400
special effects shots 376
spectral locus 196
Spectra Professional light meter 106
specular light see hard light
Speed Rail 250, 258, 259, 325, 331, 333, 339
spider boxes 444, 445
spinning mirror shutter 112
Spinotti, Dante 102
splash boxes 363
splinter unit 402, 417
split diopter 376, 378
splitters 450
spot meter see reflectance meter
spray shots 462
Spyder5Elite 124, 127
SpyderCheckr 24 125
square-wave 247
squibs 469
SRMaster recording 103
stabilizer rigs 351, 352, 352
stage service 436
standard definition (SD) 100, 120, 126
Stapleton, Oliver 304, 336
static frame 67–68, 67, 68
Steadicam 344, 346, 347, 355, 361, 362–363
Steele Chart 470, 474
steering bar 358
Stewart, Jimmy 53
sticks see tripods
stingers 446
Storaro, Vittorio 207, 386
Stranger Than Paradise 68
strobes 461–463
strobing effect 347
studded C-clamps 333–334, 335
studded chain vise grips 342, 342
studio soft lights 255
studio swing 172
stunt cameras 402
stylization 15, 15
subjective point-of-view 77, 78–79, 78, 79, 89
sub-pixels 106
suicide rigs 360
sunguns 259
sunlight 285, 287, 288, 289, 418
sunrise shots 418
Sun Seeker software 418
sunset filters 235, 236
sunset grads 236
Sunset Song 467
superwhite 176
Suspicion 9, 11
swing-and-tilt mount 379, 379
sync generators 121
syncing, audio/video 481
tail slates 411, 413
taking lens 368
Tangent Wave 2 222
Tascam DR60D 482
T-bones 336, 339
technical issues: audio 477–482; dimmers 458–463; effects 464–468; flicker 471–475; greenscreen/bluescreen shooting 456–458, 456–459, 462, 463; gunshots and explosions 469; high-speed photography 463, 464, 464, 465, 466; lighting for extreme close-ups 464; lightning 468; time-lapse photography 469–471, 474; virtual reality shooting 475–476, 476
Technicolor film 15, 166, 206
Technocrane 359, 362
tech scouts 387
telecine 223
telephoto lenses see long lens
Tentacle Sync E 481–482, 482
Terminator 2 54
test cards/test charts 130; AbelCine resolution test chart 136; calibration 133–136; gray card 130–132
texture: and exposure 155, 159; and lighting 13, 267, 275, 291; principle 20; visual 11, 13, 13
textured black 155
textured white 155
theater: breaking the fourth wall 79; vs. film 66–67
thick negative 139, 157
thin negative 139, 157
third grip 400
Third Man, The 34
third-person storytelling 77
three-dimensional field 21
threefers 441
three-phase electrical system 434–435, 434, 435, 449
Three Way Color control, in DaVinci Resolve 223, 226
Thunderbolt RAID storage unit 429
tie-in clamps 444
tie-ins 439–441
Tiffen, Ira 239
TIFF format 103
tight close-up 71, 73, 77
tilt 346, 347; diagonal 23; Dutch tilt 32, 34, 355
tilt plate 356
tungsten sunguns 259
turnaround technique 55–58, 82
turtles 336, 339, 340
TV effects 465–466
Tweenie Fresnel light 252, 280, 304, 311
twist locks 444–445
two shots 71, 75
Tyler mounts 362
U-ground adapter 436
Ultrabounce 20, 272
Ultra Con® 1 filter 240
waveform monitor (WFM) 120, 120, 145, 196, 458; and exposure 148–149, 157, 158–160;
Y/C display 122
Y-cords 248
Yniguez, Santiago 361
Z-bar 358
zebras 149–150, 150, 160, 161
Zeiss F/0.7 still photo lens 90
Zeiss Prime lens 138, 368
Zhang Yimou 218
zip extensions/zip cord 446, 453
zip lights 255, 255, 257
Zodiac 72
zolly 380
Zone System 130, 147
zoom 348–349; and back focus 380; and depth-of-field 376–377; and dolly shot, difference between 348–349; hiding 348; reverse zoom 380
Zoom F4 recorder 482
Zoom H4n recorder 480
Zsigmond, Vilmos 21, 302
THE BASICS OF FILMMAKING
SCREENWRITING, PRODUCING, DIRECTING, CINEMATOGRAPHY, AUDIO & EDITING
This book was designed for film students, people learning filmmaking on their own, and people in the film business who want to move up to the next level. It covers:
• Screenwriting
• Producing
• The AD department
• Directing
• Scene shooting methods
• Continuity
• Cinematography
• Lighting
• Editing
• Audio
• Set operations
• Art department
• Data management
The accompanying website has videos on methods of shooting a scene, directing, cinematography, lighting, continuity, color, and other filmmaking topics. It also includes usable forms for all aspects of filmmaking: budgets, schedules, location scouting, daily reports, camera reports, and many more.

MOTION PICTURE AND VIDEO LIGHTING
FOR CINEMATOGRAPHERS, GAFFERS & LIGHTING TECHNICIANS
Lighting is at the heart of filmmaking. The image, the mood, and the visual impact of a film are, to a great extent, determined by the skill and sensitivity of the director of photography in using lighting. This book explores technical, aesthetic, and practical aspects of lighting for filmmaking. Chapters include:
• Scene lighting
• Lighting as storytelling
• Lighting sources
• The lighting process
• Controlling light
• A lighting playbook
• Color
• Electricity & distro
• Gripology
• Set operations
• Greenscreen/bluescreen
• Lighting plans for small, medium, and large films
• Sample equipment orders for small, medium and large projects
In addition, a robust companion website includes up-to-date video tutorials, lighting examples, color control, types of diffusion, and other resources.

THE FILMMAKER'S GUIDE TO DIGITAL IMAGING
FOR CINEMATOGRAPHERS, DITS & CAMERA ASSISTANTS
This book covers both the theory and the practice, featuring full-color, in-depth coverage of essential terminology, technology, and industry-standard best-practices. Interviews with professional cinematographers and DITs equip you with knowledge that is essential if you want to work in today's motion picture industry, whether as a cinematographer, DIT, Digital Loader, Data Manager, Camera Assistant, Editor, or VFX artist. Topics include:
• Digital sensors and cameras
• Waveform monitors, vectorscopes, and test charts
• Using linear, gamma, and log encoded video files
• Exposure techniques
• Understanding digital color
• Codecs and file formats
• The DIT cart
• Data management
• Workflow from camera to DIT cart to post
• Using metadata and timecode
The website includes interviews with top industry DITs and Colorists.