Cinematography FAQ

Here is a list of frequently asked questions that cinematographers and filmmakers get asked about the cinematography process and its many disciplines: camera equipment, lighting, grip, gaffing and much more. If you don't see an answer to your question on cinematography, submit a new FAQ to our knowledgeable experts at Tread Productions today.

What is HDV Format?

High-Definition Video (HDV) is a format for recording high-definition video on standard DV cassette tape. It is an affordable format used with digital camcorders. HDV was quickly adopted by amateur and professional videographers when it became available, thanks to its low cost, image quality and portability, and it is an accepted format on many professional productions.

Can HDV be used and shown on ordinary televisions, DVD players etc.?

HDV is a highly compressed high-definition format. As a result, it is not natively compatible with standard-definition televisions, DVD players and the like. However, both the camera and the video tape recorder (VTR) have a "down-converted" output that can be used to play on a standard-definition TV. This output provides a standard-definition (lower-detail) version of the HDV original recording.

If you are going to edit HDV footage, it can be captured into an editing system either via the camera's or player's FireWire connection, or through a FireWire-to-serial-digital video converter. Your editing system then works with a high-definition master; at the end of the process, HD masters or down-converted SD masters can be created to play on TV systems or to create DVDs.

Do I need a light meter, which one to buy?

Understanding and controlling the exposure on your camera is one of the most important factors in enhancing the production values of your film. Although most modern cameras, particularly video cameras, come with automatic exposure functions, most filmmakers find it extremely useful to have and use a light meter on set. And if you're going to be shooting on film, a decent light meter is considered essential.

The components used to build today's light meters are expensive, and this is reflected in their retail price. The good news, however, is that for most independent filmmakers, it isn't necessary to fork out for the latest meter produced. Basic light meter technology hasn't changed much in decades, so if you don't need flashy additional features, you can get just as good an exposure using an older model. And as many of these meters don't have a lot of electronic components, they don't need batteries either.

Hidden in many DPs' kit bags are trusty meters such as the Weston Master V, or even older models such as the Russian Leningrad 4, that are still just as useful today. More modern meters such as those made by Sekonic (L-508) and Minolta (Spot Meter F) can still be picked up for reasonable prices, particularly when buying second-hand. If you're considering buying a newer model, there is an interesting discussion on cinematography.com about the subject.

Another useful trick is to have a decent single-lens reflex (SLR) stills camera handy on set. Apart from taking stills for your film, you can use the SLR to check your light readings with its built-in spot meter. This is particularly useful when you can't get close to the subject, and you can also use it to check the calibration of your standard exposure meter.

However, having a light meter with you is only part of the equation – you also need to become well-versed in its use. Reading any decent book on cinematography should give you the general knowledge you need; from there it's all about experimentation and experience.

How can I get a “big budget” look for my lighting without spending a fortune?

This general question appears in many filmmaking forums with reasonable frequency, and you can get some good basic answers from the many posters on the Usenet newsgroup rec.arts.movies.production.

Many Directors of Photography (DOPs) say, "Light effects are a relation between the light source and the film. For example, a nice orange effect can be created by using tungsten light with daylight-balanced film, or a neat blue effect by using HMI lights with tungsten-balanced film. Green effects are most often achieved by using non-balanced fluorescent lights (fluorescent lights not intended for film/TV production). Additionally, if you are using tungsten lights with tungsten-balanced film, you can put a blue gel on the light to make it appear blue. The same goes for HMI/daylight (an orange gel makes it orange), or a green gel on a light to make it green."
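
The source/film-balance pairings in the quote above can be captured in a tiny lookup table. This is only a sketch: the combinations come from the text, while the dictionary and function names are illustrative.

```python
# Color casts produced by mixing light sources and film balances,
# as described in the quote above. Purely an illustrative lookup.
CAST = {
    ("tungsten", "daylight film"): "orange",
    ("HMI", "tungsten film"): "blue",
    ("uncorrected fluorescent", "tungsten film"): "green",
    ("uncorrected fluorescent", "daylight film"): "green",
}

def color_cast(light: str, film: str) -> str:
    """Return the color cast a light source produces on a given film balance."""
    return CAST.get((light, film), "neutral (matched)")

print(color_cast("tungsten", "daylight film"))   # orange
print(color_cast("HMI", "tungsten film"))        # blue
```

The same shifts can be forced deliberately with gels, as the quote notes: a matched source plus a colored gel reproduces the mismatch on purpose.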

Other lighting technicians suggest buying 500-watt "work lamps" from your local DIY store. At about $20 each, they are a cheap way to get lots of light into a shot.

Another technique is to look at similar scenes that have already been shot and simply match their lighting using the cheaper light sources available to you. Try going in closer or zooming in on your shots. Don't use huge sets that need lots of light. Learn from the film noir techniques of old, and learn to utilize your creative, artistic imagination – that doesn't cost anything at all!

How can I make a “Steadicam”?

Making a Steadicam-style stabilizer is easy and can be quite cheap. Plans can be found on many websites that have comprehensive guides on how to build your own.

Steadicams (or camera stabilizers) are attachments used to capture smooth-looking video even when the camera and camera operator are in motion. An operator using a Steadicam can be walking (even jogging), moving through tight hallways and doorways, or climbing up and down stairs without shaking the camera. Unfortunately, professional Steadicams cost around $1,500, and even a cheap third-party one will cost you $600+. Not exactly a bargain, considering many of us use cameras in that price range.

Whether you are an aspiring filmmaker, a videographer, the family documentarian, or just want more utility out of your video camera, you’ll appreciate using a Steadicam.

If you are a DIY person or know what you are doing, you can probably build a functional Steadicam in about 20 minutes. It might take an hour or longer if you must read the instructions while you work or aren't very good with hand tools. If so, here is a link for building a cheap Steadicam. Note: improper or irresponsible use of a Steadicam can quickly result in damage to your equipment and/or injury to yourself. Build and use at your own risk.

How do you convert feet to running time when shooting on film?

To calculate the total running time, add together all the running times for your individual reels in an audiovisual film. If the total running time is more than five minutes, round it off to the nearest minute. If the total running time is less than five minutes, indicate it as minutes and seconds.

Working out the running time for a film is simple, and there are many online calculators that make the conversion easy.

Film Footage Calculator     Run Time Calculator          

Example:  How many minutes is 400ft of film?

Answer:  There are roughly 4.4 minutes in a 400ft roll of 35mm film at 24fps (35mm runs through the camera at about 90 feet per minute).

Film Formats

Note: remember changing the camera speed (increasing/decreasing the frame rate) will affect these ratios.
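
As a sketch of the conversion itself: each gauge exposes a fixed number of frames per foot (35mm 4-perf: 16, 16mm: 40, Super 8: 72, all standard values), so feet, frame rate, and running time are related by simple arithmetic. Function and dictionary names below are illustrative.

```python
# Standard frames-per-foot values for common film gauges.
FRAMES_PER_FOOT = {"35mm": 16, "16mm": 40, "super8": 72}

def running_time_seconds(feet: float, gauge: str = "35mm", fps: float = 24.0) -> float:
    """Seconds of screen time contained in `feet` of film at a given frame rate."""
    return feet * FRAMES_PER_FOOT[gauge] / fps

# The FAQ example: a 400ft roll of 35mm at 24fps.
print(round(running_time_seconds(400, "35mm") / 60, 1))  # 4.4 (minutes)
# The same 400ft in 16mm lasts much longer (more frames per foot):
print(round(running_time_seconds(400, "16mm") / 60, 1))  # 11.1 (minutes)
```

As the note above says, changing the frame rate changes the ratio: shooting 35mm at 48fps halves the screen time you get from each foot.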

How do you film a computer/video screen without it flickering?

Filming of a computer screen requires the camera speed to be synchronized with the scanning rate of the monitor. There are several ways you can do this.

1) You can use a CEI film/video synchronizer, which connects to the monitor/computer and replaces the camera's internal sync with that of the monitor. You must start your camera rolling as the monitor starts a scan; otherwise, you will get a black interference line in the picture. This can be frustrating, but persistence fixes it: simply stop and start your camera while looking through the viewfinder until you get a take where the line is not visible. Note that this solution is only suitable when you're shooting on film.

2) However, there is a cheaper, more convenient way to do it. All you really need is a speed control that goes down to one-thousandth of a frame per second and has a phase button. These can be rented for around $300 a day/weekend – less than half the cost of the CEI film/video synchronizer. If you have a newer camera such as the ARRI SR3, these features are built in; simply operate the camera and adjust the frame rate until the dark bar stops moving completely.

Another option is to rent a device called Cinches, which measures frequencies. It works on lights, monitors, etc., but it only gives readouts to the hundredth of a frame per second, and when checking monitors it gives the field rate, not the frame rate. This helps, but you still need the speed control – even so, the two together are often cheaper to rent than a CEI film/video sync box.

If you have the money, the sync box is probably the way to go. It is the most accurate option, and because it is constantly connected to and controlling the camera, the camera speed will follow if the frame rate changes for any reason – which is common with computer monitors and programs that cause the monitor to run at slightly different speeds. More detailed information on filming computer screens can be found in the American Cinematographer Manual, published by the ASC.

How can I make my video look like film?

The short answer to this rather repetitive question is simple: if you want your movie to look like it was shot on film, then shoot it on film.

There are many products out there that can approximate a film look, but none of them currently stacks up against the real thing. Film is film, video is video. The way in which the image is captured is so different between the two mediums that it's hard to get an exact match.

Technically speaking, video is captured in RGB, meaning the picture is effectively captured using three sensors in one: one sensitive to red light, another to green and a third to blue. This is very close to the way our eyes see – we have cells sensitive to red, green and blue light, plus cells sensitive to overall light intensity – so what we see on video closely matches our own experience of vision. In video reproduction, 25 or 30 still pictures pass per second to create the illusion of motion, and during recording each still picture is built up as the view is scanned from top to bottom and left to right in an interlaced, weaving manner.

Film, by contrast, captures light on much the same principle as four-color (CMYK) print production: there are four photosensitive layers on the film – one for the magenta elements of the viewed picture, one for the yellow, one for the cyan, and a fourth for overall light intensity. The fourth layer matters especially for dark areas and good contrast. Part of the charm of film reproduction is that there is a slight error in color. In film reproduction, 24 still pictures pass per second to create the illusion of motion, and during recording each still picture is exposed in one shot, just as an ordinary stills camera does.

How do I shoot a scene inside a moving car?

Shooting inside a car is simple but can be tricky. It often needs a bit of lengthy preparation and some patience. There are three simple ways of taking a shot inside a moving car:

1) Place the camera inside the car. You can keep it on the rear seats while your subjects are in the front seats and shoot an over-the-shoulder shot. Or you can shoot from the front seat while your subjects are in the rear and get a mid-shot. You can place the camera in between the two front seats.

Another way is to sit beside your subject and take a close-up shot. If two subjects are involved, then take their shots separately and ask them to act as if they are talking to each other. You can then mix these two shots in editing to create the required sequence.

2) A second trick is to attach the camera to the bonnet and pre-focus it. This is effective when only one character is involved, but the major disadvantages are the immobility of the camera and the occasional jerks. You can try using a gyro stabilizer if you have the budget for it, or shoot the scene on a smooth highway. As for attaching your camera to the bonnet: an assortment of wooden blocks can be used to get the proper angle, with the camera fixed in place with some crazy glue, which works well.

3) The third and most skillful way of taking the car shot is to attach the camera to the end of a boom and take the shot from another car. The advantage of this method is the easy movement of the camera: you can move it back and forth by altering the relative speeds of the two cars, providing an easy way to capture the glamour of your moving car shot.

How do I white balance digital video?

The camera needs to have a white balance button. Point the camera at a white area under the light you'll be shooting in – the easiest way is to fill the frame with a piece of white paper – then press the button and adjust the white balance controls until the correct balance is achieved and set. Note: this must be done each time the light conditions change.

How do you create the ‘frozen time rotation’ trick?               

Many commercials, music videos, and films use a technique where the subject appears to be 'frozen in time' while the camera changes angle, creating a frozen rotation effect. This cool technique became extremely popular after it was used in "The Matrix".

The effect is created by setting up a large number of cameras (e.g., 125) in an arc around the subject. Each camera photographs the subject from a slightly different angle, and in post-production the footage from all the cameras is edited on a frame-by-frame basis to create the effect of a slow dolly around the subject while time has stopped.

Today, however, there is software that allows you to create this effect in post-production with images from as few as two cameras. The shots from each camera are used as key frames, and the computer renders the frames in between – a process similar to morphing. Fact: UK production company Time-Slice Films was a pioneer in developing this technique and has more info on their web site.

How do you create the effect where the action speeds up and slows down in the same take?

This technique is called "ramping". It involves increasing or decreasing the camera frame rate during a take, while adjusting the shutter angle and/or f-stop to keep the exposure constant through the ramp.

There are camera accessories, such as the Arri RCU, which can be fitted to a camera not designed to ramp and accomplish the same thing by adjusting the iris ring on the lens during the ramp. The reason the shutter angle and/or stop is adjusted is that differing frame rates require different amounts of light to reach the film. Example: if your stop is f/5.6 at 24 fps and the end of your ramp is 48 fps, you will need to open up a full stop, because the film is running faster through the gate and requires additional light for proper exposure; the stop at 48 fps would be f/4. You can program all these settings easily with the Arri RCU. During the ramp, the camera slowly adjusts the shutter to the exact rate needed as it ramps up or down; it doesn't happen all at once, so timed adjustment is critical to make the shot look right.
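
The one-stop adjustment in the example generalizes: each doubling of the frame rate halves each frame's exposure time, so the lens must open one full stop (the f-number divides by the square root of two). A sketch of that arithmetic; the function name is illustrative.

```python
import math

def compensated_stop(base_stop: float, base_fps: float, ramp_fps: float) -> float:
    """F-stop needed at `ramp_fps` to match the exposure set at `base_fps`.
    Each doubling of frame rate opens the lens one full stop (divide by sqrt 2)."""
    stops_to_open = math.log2(ramp_fps / base_fps)
    return base_stop / (2 ** (stops_to_open / 2))

# The example from the text: f/5.6 at 24fps ramping to 48fps needs f/4.
print(round(compensated_stop(5.6, 24, 48), 1))  # 4.0
# Two doublings (24 -> 96fps) would need two full stops: f/2.8.
print(round(compensated_stop(5.6, 24, 96), 1))  # 2.8
```

In practice the camera (or an accessory like the RCU) spreads this change smoothly across the duration of the ramp rather than applying it all at once.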

If you have budget constraints, you can simply shoot at higher-than-normal speeds and control the "ramp" in a non-linear editing program such as Avid Media Composer or Adobe After Effects. Adobe Premiere can often yield good results, with fine control over the exact timing of the speed change, and the ramp can be tuned to whatever duration you desire. The reason for the higher shooting speeds is to eliminate the motion stutter or strobing that appears when you slow down footage in post-production. If you shoot at higher frame rates (50fps to 100fps), slowing the footage back to 25fps produces no strobe-like effects, and the only perceptible change is the reduced motion blur.

How do you do the zoom trick, where the background moves closer while the subject stays in the same place?

This zoom trick, where the background seems either to move towards the camera or be pushed away, is done by zooming in while moving the camera away from the subject, or vice versa. The result is that the subject remains the same size in the shot while the background moves. This technique is an excellent tool when used effectively; however, it is harder to do well than it seems.
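
The geometry behind the trick can be sketched simply: the subject's size on screen is (to a good approximation) proportional to focal length divided by camera-to-subject distance, so keeping that ratio constant keeps the subject the same size while the background perspective changes. The function name below is illustrative.

```python
def matching_focal_length(start_focal_mm: float, start_dist: float, new_dist: float) -> float:
    """Focal length that keeps the subject the same size on screen
    after moving the camera from `start_dist` to `new_dist`
    (subject size is proportional to focal_length / distance)."""
    return start_focal_mm * (new_dist / start_dist)

# Pulling back from 2m to 4m means zooming from 25mm to 50mm:
print(matching_focal_length(25, 2.0, 4.0))  # 50.0
```

This is why the move is so hard to perform by hand: the zoom must track the dolly continuously, not just match it at the endpoints.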

This technique is also known as blowing out the background, and it is used as a suspense-building device in thriller and horror movies. Alfred Hitchcock was one of the first to use it (in "Vertigo"), and there are good examples of the effect in "Jaws" and "Goodfellas" (an extremely slow one). The technique is even parodied in the Kevin Bacon film "The Big Picture".

How do you create shots where the camera seems to pass through a window or glass object?

To create the effect, editing is key – as is being able to open the glass. Follow these steps: Mark a path on the ground to follow (painter's tape works well), then shoot along the path up to the point of touching the glass and stop. Next, open the window, door, or glass object, follow the path again, push the camera through the opening, and stop recording. Then, if possible, go to the other side of the glass, line up with the marked path, and record a bit more. In editing, find the proper frames to splice it all together. The hardest part is staying on the path. Tip: a small dissolve lasting less than a second, placed just as the camera passes through the glass, will help sell the effect.

What is the best way to film a person looking in the mirror?

The trick here is to place the camera on a different axis than the subject and the mirror. In most cases, the person looking in the mirror will not be able to see themselves, at least not in the same manner that they will appear on screen.

To film it, position the camera and the mirror so that the camera is just out of the mirror's field of view when seen through the viewfinder. Then position the subject so that they appear to be looking at themselves in the mirror. Depending on the size of the mirror, the subject will probably not be able to see more than a small part of themselves, or may not be able to see themselves at all. However, they will be able to see the camera, and it is very important that you direct the subject to look at the reflection of the camera, as that will record as them looking directly into the lens.

How is deep camera focus achieved?

Deep focus is achieved by stopping the camera lens down, which requires a lot of light. Even though color negative films are now faster than the black-and-white stocks used in the 1940s, deep focus has never been used much in color (with a few exceptions). Some say that deep focus is less attractive in color. In black and white, everything is monochromatic and reduced to tone and texture – a curtain backdrop in the background is not going to distract from a face in the foreground even if both are in focus. In color, a lot of background detail jumps out at you, competing too strongly for the viewer's attention. Therefore, deep focus only works well in color when there is complete control over the art direction, as in a period film.
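
The effect of stopping down can be sketched with the standard hyperfocal-distance formula. This is only an illustrative calculation: the 0.025mm circle of confusion is a commonly quoted value for 35mm, and the function name is made up for the example.

```python
def hyperfocal_mm(focal_mm: float, f_stop: float, coc_mm: float = 0.025) -> float:
    """Hyperfocal distance in mm: focusing at this distance keeps everything
    from half this distance out to infinity acceptably sharp."""
    return focal_mm ** 2 / (f_stop * coc_mm) + focal_mm

# A 28mm lens stopped down to f/11 vs. the same lens at f/2.8:
print(round(hyperfocal_mm(28, 11) / 1000, 1))   # ~2.9 m (sharp from ~1.4m to infinity)
print(round(hyperfocal_mm(28, 2.8) / 1000, 1))  # ~11.2 m (far less depth of field)
```

Stopping down from f/2.8 to f/11 pulls the hyperfocal distance in by a factor of about four, which is the whole point of pouring light onto a deep-focus set.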

VistaVision, the old-school "deep focus" king, used a 28mm lens with a field of view slightly exceeding that of the 35mm B&L CinemaScope lens, which was seldom used because of its severe barrel distortion.

Beyond tricks like split-diopters and old-school slant-focus cameras, you can study a few color films that were lit to a high f-stop for a deep-focus effect. Look at Raiders of the Lost Ark, or much of DP Douglas Slocombe's other work, such as Lady Jane, where he often filmed with deep focus. Some of Peter Bogdanovich's work, such as Texasville (1990), shows him trying to work with deep focus, but that film also shows how color can get uglier when everything is in focus.

A problem with modern films is that it's impossible to shoot an entire film in deep focus, because some locations require you to work with a certain amount of available light. For example, you could light a bedroom to f/11, but the next scene might be in an underground parking garage with low ceilings and fluorescent lighting, shooting at f/2.8 – and then a car driving out of the garage into a city street at night, shooting at f/1.3. These varying lighting conditions make consistent deep focus very hard to sustain.

Lastly, another problem today is that many directors don't really know how to stage their films for deep focus. What's the point of lighting to f/11 if every actor stands ten feet from the camera, or the whole scene is shot in close-ups against a wall?

What is the best information on working with fluorescent lights?

Fluorescent lights or "flos" have no filaments; light is given off when gases subjected to a current become excited and fluoresce. Household flos are manufactured in warm white or cool white. In film terms, warm white is roughly analogous to tungsten color temperature (3200K), and cool white is approximately daylight-balanced at 5600K. In practical terms, flos are corrected with minus-green gel, a magenta color, to restore balance to either tungsten white or daylight white. Sometimes, if necessary, film light fixtures instead have "plus-green" gel added, to match existing fluorescent fixtures.
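
Gel corrections like these are commonly reckoned in "mireds" (one million divided by the Kelvin color temperature), because equal mired shifts produce roughly equal visible changes at any temperature. A sketch of the arithmetic, not tied to any particular gel brand; function names are illustrative.

```python
def mired(kelvin: float) -> float:
    """Convert a color temperature in Kelvin to mireds (micro reciprocal degrees)."""
    return 1_000_000 / kelvin

def mired_shift(from_k: float, to_k: float) -> float:
    """Mired shift needed to convert one color temperature to another.
    Negative values are blue (cooling) shifts, positive are orange (warming)."""
    return mired(to_k) - mired(from_k)

# Converting tungsten (3200K) to daylight (5600K) needs a blue shift:
print(round(mired_shift(3200, 5600)))  # -134 (roughly a full CTB gel)
# Going the other way needs the equal-and-opposite orange shift:
print(round(mired_shift(5600, 3200)))  # 134 (roughly a full CTO gel)
```

This is why gel charts quote mired values rather than Kelvin differences: the same gel produces the same mired shift regardless of the source it is placed on.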

Modern cinematography uses many types of fluorescent fixtures produced especially for film. They are lightweight, take accessories, and use special green-corrected bulbs produced by companies like Kino Flo and Syne-Flo. The Kino Flo company is the best known and makes fixtures ranging from the Wall-O-Light, with ten 4 ft bulbs, to micro fluorescent tubes thinner than a pencil.

Another concern when shooting with fluorescent lighting is the flicker effect. The chance of flicker is directly related to the frame rate you are using and where you are shooting. Flicker is caused by the dead time between bursts of electricity from the utility company. In the U.S., mains power runs at 60Hz; in Europe and Australia it is 50Hz. At 60Hz there are 120 bursts and 120 dead times per second, which under fluorescent lighting means 120 flashes and 120 gaps of no light per second (100 of each at 50Hz). What this means for cinematographers is that you must shoot at frame rates that catch a flash while the shutter is open.

In the U.S., that basically means shooting at a frame rate that divides evenly into 120 (i.e., 24, 30, 40, 60, 120). Other frame rates will produce flicker unless you use an electronic ballast. A ballast shrinks the gaps of no light to very brief periods, to the point that flicker only appears at very high frame rates, around 400 fps.
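
The rule can be sketched as: a frame rate is flicker-safe when every frame captures the same whole number of light pulses. This is an illustrative check only (note it treats 36fps as unsafe at 60Hz, since 120/36 is not a whole number); the function name is made up.

```python
def is_flicker_safe(fps: float, mains_hz: float = 60.0) -> bool:
    """True if every frame sees the same whole number of light pulses.
    Fluorescents pulse at twice the mains frequency (120/sec at 60Hz)."""
    pulses_per_frame = (2 * mains_hz) / fps
    return abs(pulses_per_frame - round(pulses_per_frame)) < 1e-9

# Candidate frame rates under U.S. 60Hz power:
safe = [f for f in (24, 25, 30, 36, 40, 48, 60, 120) if is_flicker_safe(f, 60)]
print(safe)  # [24, 30, 40, 60, 120]
```

Under 50Hz power the safe list changes (25fps works, 24fps does not), which is one reason PAL countries standardized on 25fps for television work.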

With a camcorder, how can you produce the “Hollywood” movie look?

Let's get real: you cannot shoot a Hollywood blockbuster with a camcorder. Domestic-grade camcorder equipment is exactly that – domestic grade. It is produced to give you a hassle-free way to go out and shoot your children's birthday party or their trip to Disney World. Domestic-grade equipment is built to be accessible to the market and cheap to manufacture, and at the end of the day that means a loss of quality. It's a compromise. If you want perfect Hollywood images, you need to pay for perfect-quality equipment.

Domestic equipment is getting better, particularly with the advent of DV and other digital formats, but the thing that will always hold it back is the lens. Good lenses are expensive to manufacture, so to keep the cost of domestic equipment reasonable, cheaper lenses are used – often made of plastic rather than glass, and generally much slower than their expensive counterparts. In practice this means you need more light to expose your pictures properly, and there is a definite ceiling on the sharpness and color quality you can achieve.

However, you can push your camcorder results a little further if you spend time and money on properly lighting your scenes. Let's face it, you'll never be able to compete with the quality of 35mm, a truckload of gaffers, and an experienced Director of Photography, but you can get better images with better lighting. The key thing to focus on is the lighting of your subject, but for overall image quality you also need to light your backgrounds properly. So, if you know nothing about film lighting, it's worth purchasing a good book and reading up on the subject.

Tip: You might find it a lot easier to turn off the auto-exposure and auto-focus functions on the camcorder and adjust them manually – particularly if you plan on tilting, panning, or moving the camera. If your camera does not allow manual exposure or focus, you're making life extremely difficult for yourself and should consider upgrading to better equipment.

Should I shoot 16mm or digital?

This is perhaps the question most at the forefront of the minds of new and independent filmmakers today. Like many of these questions, there is no single best answer: individual circumstances differ, access to high-quality equipment is not easy, and many other factors influence the decision.

Assuming you are making a low- or no-budget film, the number one question you should probably be asking is, "Do you know anyone who works at a lab who can get you cheap processing?" If the answer is no, then you're probably better off shooting digital entirely, for reasons of cost. While it is easy to get your hands on a 16mm camera, and even cheap or free stock, it's more than likely that the cost of processing will be well above your available budget.

Film processing today is expensive – anywhere between USD $0.15 and $0.50 per foot for 16mm, depending on your location. If you are shooting drama, you're probably looking at a minimum shooting ratio of 5:1, meaning you will shoot five times as much footage as you will see on screen in your final cut. A 5:1 ratio is very tight; if you need to shoot complex scenes you may be looking at more than 7:1, and documentaries can easily get into double figures. Example: say you are shooting a 90-minute feature on 16mm at 5:1. Ten minutes of 16mm at 24 fps is approximately 400 feet, so you need about 3,600 ft for your final cut (9 x 400), and at a shooting ratio of 5:1, approximately 18,000 ft. In the end, even if you process at one-light (the cheapest), you are looking at forking out around $2,700 – $9,000 for processing alone, and that doesn't include light-balanced release prints, optical effects (i.e., fades, dissolves) or other extras.
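
The arithmetic in that example can be sketched as follows. Note that 16mm at 24fps runs at exactly 36 feet per minute (40 frames per foot), so the text's "400 ft per 10 minutes" is a round-up; names below are illustrative.

```python
FEET_PER_MINUTE_16MM = 36  # 40 frames per foot at 24fps -> 0.6 ft/s

def processing_cost(run_minutes: float, ratio: float, cost_per_foot: float) -> float:
    """Estimated lab processing cost for a film with `run_minutes` of screen
    time, shot at `ratio`:1, at a given price per foot of 16mm."""
    final_cut_feet = run_minutes * FEET_PER_MINUTE_16MM
    shot_feet = final_cut_feet * ratio
    return shot_feet * cost_per_foot

# The 90-minute feature at 5:1, at the cheap and dear ends of the range:
print(round(processing_cost(90, 5, 0.15)))  # 2430
print(round(processing_cost(90, 5, 0.50)))  # 8100
```

These come out slightly below the $2,700–$9,000 quoted above because the text's 400 ft figure rounds the footage up; either way, the shooting ratio multiplies the bill directly, which is why it is the first number to control on a tight budget.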

If you have a budget of $10,000 or more for your film, 16mm is a realistic option, as it offers superior quality and look compared to the digital equipment you can afford on that budget. If your budget is less than this, you had better know someone at a lab if you realistically want to shoot film.

If you decide to go the digital route and have aspirations of your film being seen by anyone besides family and friends, you need to think about shooting on one of the preferred digital formats. Digital video cameras are getting cheaper all the time, but most entry-level models use a single CCD and generally do not provide the level of quality you need for professional projects. The next step up, before the ultra-expensive pro gear, is the 3-CCD "prosumer" camera. The most common of these are the Sony PD and Canon XL series, which retail in the region of $4,000 and can also be rented from a variety of camera houses.

What frame rates are movies shot at?

Traditionally, films have been shot at a frame rate of 24 frames per second. Although this may seem like an arbitrary number, it is a historical remnant from when sound was first introduced to movies in the late 1920s.

Prior to the introduction of sound, films were typically shot at around 18 frames per second. This rate was chosen because it was close to the lowest speed at which you can project a series of images and evoke a phenomenon in the viewer's eye known as 'persistence of vision' – where the gaps between the still images are no longer perceived, giving the illusion of motion. Film stock has always been expensive, so early movie pioneers did not want to spend any more on stock than necessary – hence settling on the minimum frame rate that would work in most cases. This is also why silent films appear to be in 'fast motion' when played back today: a film shot at 18fps but played back at 24fps will appear to be sped up.
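
The 'fast motion' effect is simple arithmetic: the apparent speed is just the playback rate divided by the shooting rate. A quick sketch (function name illustrative):

```python
def apparent_speed(shot_fps: float, playback_fps: float) -> float:
    """Speed multiplier seen on screen when footage shot at one rate
    is played back at another."""
    return playback_fps / shot_fps

# 18fps footage projected at 24fps runs a third faster than life:
print(round(apparent_speed(18, 24), 2))  # 1.33
```

The same relationship, run the other way, is why overcranking (shooting at 48fps for 24fps playback) produces half-speed slow motion.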

However, with the introduction of sound, 18fps meant the film moved through the projector too slowly to play back sound at a natural pitch. This is because early film sound was optically printed onto the stock next to the image and read by a sound head in the projector. In keeping with the cost-saving mentality, 24fps was chosen as the new standard because it was the minimum speed at which sound could be played back at a natural pitch while minimizing the amount of film stock (and therefore cost) needed to shoot a film. As a result, the standard frame rate has remained 24fps for over 80 years.

Of course, video has historically used alternative frame rates due to the fundamentally different way in which it captured images when compared to film. The two main video standards are NTSC and PAL. NTSC captured images at 60 fields per second (with a field being ‘half’ a frame), giving an effective frame rate of 30fps (actually 29.97fps for all you video engineers). PAL however captured images at 50 fields per second, or an effective frame rate of 25fps.

Many filmmakers believed that 24fps offered a more ‘cinematic’ look for their work produced, so not long after the digital revolution began, video cameras capable of shooting 24fps started to appear. Instead of capturing with two fields, these newer cameras were able to capture an image in a single progressive frame. 

More recently, some parts of the film industry have begun to get excited about the potential of high frame rate (HFR) cinematography. Proponents include James Cameron and Peter Jackson, whose film "The Hobbit: An Unexpected Journey" was the first major film to be shot and shown in HFR – in this case at 48fps, though James Cameron has indicated interest in shooting at 60fps. Whether these formats have a future is yet to be seen: the release of "The Hobbit" seemed to divide both industry and audiences, with detractors arguing that the hyper-real nature of HFR cheapens the look of the film and reduces the 'cinematic' feel.

What is a ‘T-Stop’?

T-Stops serve the same purpose as F-Stops and are used in the same way – that is, to control the amount of light that reaches the unexposed film stock. The difference between the two is that a T-Stop has been measured for an individual lens, whereas F-Stops are calculated from a fixed formula which does not account for variations found in individual lenses.

When a lens is manufactured, there will always be slight variations in the focal length and aperture sizes. In most cases these variations have no noticeable effect on the focus or exposure of the film, but in some delicate cases they will. So, to provide cinematographers with the most accurate information possible, manufacturers measure the stops of a lens based on its individual behavior. The results of these tests are then marked on the lens as T-Stops (in effect, more accurate F-Stops), and this is one of several reasons why pro lenses are so expensive.
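
As a rough illustration of the relationship, a T-stop can be approximated from the F-stop and the lens’s measured light transmittance. This is only a sketch – the function name and the 81% transmittance figure are illustrative assumptions, not data for any real lens:

```python
import math

def t_stop(f_stop: float, transmittance: float) -> float:
    # A lens that transmitted 100% of its light would have a T-stop equal
    # to its F-stop; real glass always absorbs and reflects a little, so
    # the T-stop is always a slightly "slower" (larger) number.
    return f_stop / math.sqrt(transmittance)

# A hypothetical f/2.0 lens that transmits 81% of the light it gathers:
print(round(t_stop(2.0, 0.81), 2))  # -> 2.22, so it would be marked T2.2
```

This is why a lens engraved T2.2 may have a geometric aperture of f/2.0: the extra fraction of a stop accounts for light lost inside the glass.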

What is skip-bleach processing?

Skip-bleach processing refers to a technique where artistic ends are achieved through a kind of “incorrect” processing of color film. In film stocks, it’s silver that reacts to light. In color film stocks, when the silver reacts to light, it causes a color dye coupler to form a color dye next to it. In developing, the silver itself (which turns black after exposure and development) is washed – or bleached – out. However, if you leave the silver in the print by “skipping” or “bypassing” the bleach step, the resulting image will have black silver that contaminates all the colors. In the end, the contrast increases, the blacks get dense, and the colors become darker and more de-saturated.

In Spielberg’s film “Saving Private Ryan”, he made use of a variation of skip-bleach processing called ENR, created by Technicolor Labs. In this process, the film print is run through a second black & white developer to develop the silver in permanently, whereas with a skip-bleach print you can, in theory, still wash out the silver if needed. By varying the strength of the developer, you can control how much silver gets left in, while the skip-bleach process leaves ALL the silver in. So, the ENR process can be as subtle or as strong as you prefer. For example, the prints for the American musical “Evita” used a 30% ENR, the prints for the thriller “The Game” used a 60% ENR, and the epic war film “Saving Private Ryan” used a 90% ENR.

What is the ‘Circle of Confusion’?

The Circle of Confusion refers to the behavior of points of light that fall outside the focal plane of a camera lens. More specifically, the circle of confusion is the size to which a point of light grows into a visible circle in the final image. It is sometimes also called the zone of confusion and is measured in fractions of a millimeter. Put simply, the circle of confusion is what defines what is in or out of focus: the number is used to calculate depth of field, and the circle’s size affects the sharpness of an image. The smaller the circle, the sharper the image; the larger the circle, the blurrier it becomes. It is often abbreviated as “CoC”. Clear as mud, right? For more info about CoC, refer to Kris Malkiewicz’s book “Cinematography” for his explanation of the Circle of Confusion.
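
To see how the CoC number feeds into depth-of-field arithmetic, here is a minimal sketch of the standard hyperfocal-distance formula. The 0.025mm CoC figure is a commonly quoted value for 35mm film, used here only as an illustrative assumption, and the function name is ours:

```python
def hyperfocal_mm(focal_length_mm: float, f_stop: float, coc_mm: float) -> float:
    # Standard hyperfocal formula: H = f^2 / (N * c) + f.
    # Focus at H and everything from H/2 to infinity falls within the
    # chosen circle of confusion, i.e. looks "acceptably" sharp.
    return focal_length_mm ** 2 / (f_stop * coc_mm) + focal_length_mm

# A 50mm lens at f/4 with a 0.025mm circle of confusion:
print(round(hyperfocal_mm(50, 4, 0.025)))  # -> 25050 (mm, roughly 25m)
```

Note that the CoC sits in the denominator: demand a smaller (sharper) circle and the hyperfocal distance grows, which is another way of saying your depth of field shrinks.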

What is the difference between 16mm & Super 16mm?

When normal 16mm film is blown up to widescreen 35mm, the great magnification results in more graininess and a poorer image quality. This problem is further aggravated by the fact that the top and bottom of the frame are lost in changing the image to a wide-screen ratio [standard 16mm has an aspect ratio of 4:3]. Super 16mm was designed to alleviate these problems.

Super 16mm film extends the image into what was formerly the soundtrack area of the original negative. This provides not only a larger image, but one in wide-screen ratio. Thus, Super 16 requires less magnification when blowing up to 35mm, and hence there is a much smaller loss in visual quality.

Today, many newer cameras come with adjustable gates to support both standard and Super 16mm formats. Older cameras, however, must have their gates adjusted to allow for the increase in the exposure area. Super 16mm is used on a lot of mid-budget films and television programs where there is a desire to shoot on film but the finished product will be delivered on widescreen video. The popularity of Super 16 is now somewhat threatened by falling HD prices, but it still represents one of the most cost-effective ways of getting a good film look.

What is Timecode (TC) and how do you use it?

Timecode is a sequence of numeric codes generated at regular intervals by a timing synchronization system. Timecode is used in video production, show control and other applications which require temporal coordination or logging of recording or actions.

Film timecode is printed on the film by the camera. The actual TC is a machine-readable matrix with human-readable numbers every second or so. Today, Aaton Code is the most widely used and supported film TC format.

What is Timecode (TC) used for?

Film timecode can serve two functions.

The first is to expedite the telecine process, which is the process of transferring film to video and is performed in a color suite. When film TC and (matching) Nagra or DAT TC are employed in production, the dailies can be synced up much more quickly. In such a situation, a TC slate should be used, and each film take should be marked as usual. This way, the telecine operator has the sticks to fall back on if the film or Nagra/DAT TC gets screwed up.

The second way film TC can be used is in situations where a traditional slate is undesirable. The most common example of this is documentary production.

What special technical requirements are there for TC?

There are only three things you need to properly employ film TC:

1) The first, and most obvious, is a camera that is outfitted with TC. The ARRI SRIII fits the bill.

2) The second thing you’ll need is a TC DAT or TC Nagra recorder.

3) The third thing you’ll need is a telecine facility that can actually read the TC on the film. This is the most difficult of the three components to find.

Note: There is one more critical thing you’ll need: an Assistant Camera Operator (AC) who knows how to operate the camera TC and a good Mixer who knows how to coordinate TC syncing issues with the AC. This human component is an extremely important TC requirement.

What’s the best material to use for rear screens?

There are several materials that you can use to create an effective rear screen.

If your budget is a little stretched, try using voile or scrim material. White scrim works quite well because it’s very diaphanous, however it’s a good idea to keep excessive stray light off it.

You can also try getting some Rosco diffusion gel from a theatrical supply house (it’s normally used for stage and film lighting). There are different grades of diffusion gel to use, and you’ll have to experiment to see which works best for you.

Many lighting specialists have had success using Rosco E-Color+ White Diffusion #216. E-Color+ is a European product, but you should be able to match it to other Rosco product lines in many other countries. Rosco’s Cinegel #3026 is the same product, only slightly more expensive due to a higher resistance to heat.

Which camera should I buy?

This is one of the most common questions asked by new filmmakers, but fortunately the answer is by and large quite simple: it doesn’t really matter.

Having an expensive camera does not suddenly make you a better filmmaker, so instead of blowing large amounts of cash on expensive camera kit, you should ideally concentrate on making as many films as you can instead. Lots of experience will make you a better filmmaker and ultimately, if your film is engaging and well made, it will likely find an audience, regardless of which camera you used to shoot it. With lots of experience over time, you will produce much better films, and then you can start looking at the many camera formats available to find one which suits your budget.

That said, when buying any type of camera for use in filmmaking, there are a couple of givens. First, you should only buy cameras that have a full set of manual controls. This means manual focus, manual exposure, manual shutter speed, and ideally manual digital ‘ISO’ settings. Very low-end consumer cameras may be cheap, but they are designed for hassle-free shooting of holidays and weddings. For filmmaking, you need full control of your shot.

Second, the other main consideration is the camera-to-computer interface. As a filmmaker, you’re going to want to edit, so you’ll need to get your footage onto your computer one way or another. Whether this is done via USB, Firewire, SD card, or another method, you need to choose a camera with an interface that works with your editing system. Note: It’s important to remember that USB 1.0 and 2.0 are painfully slow for transferring large video files (USB 3.0 or later is recommended).

Today, however, most people are interested in shooting HD in some way, shape, or form, and most entry-level cameras (DSLRs like the Canon 5D) shoot in a format called AVC/HD. This format has the advantage of being compatible with current versions of popular editing software such as Final Cut Pro or Adobe Premiere and will provide pretty decent quality. That said, remember your ultimate picture quality is affected by a range of factors beyond the camera itself (particularly lighting, lenses, DOP skill, etc.).

Additionally, there are also plenty of MiniDV cameras around to choose from, and you may be able to pick up a prosumer version for a song compared to a newer HD camera. Prosumer MiniDV cameras, like the Canon XL1 or Sony VX1000, cost $3,000 – $5,000 when new, but you can find them for well under $1,000 these days. Using this type of camera may give you better picture quality than some of the low-end AVC/HD cameras. HDV is also worth considering; however, this format was quickly superseded by file-based HD cameras, so there aren’t that many models out there, but they can be found in the $1,000 – $2,000 price range (second hand).

Most importantly, don’t forget, if you positively must have an expensive pro camera, consider renting it first!  You can test it out and learn all about using it over a long weekend. Then, when you’re ready to shoot, just book one for the days you need and save a few bucks for other equipment you may need.

What does VTR mean?

A video tape recorder (VTR) is a recorder designed to record and play back video and audio material from magnetic tape. The early VTRs were open-reel devices that recorded on individual reels of 2-inch-wide (5.08 cm) tape. They were used in television studios, serving as a replacement for motion picture film stock and making recording for television applications cheaper and quicker. Beginning in 1963, videotape machines made instant replay during televised sporting events possible. Improved formats, in which the tape was contained inside a videocassette, were introduced around 1969; the machines which play them are called videocassette recorders.

What does SLR stand for?

SLR simply stands for single lens reflex, which refers to the way these cameras work. When a photographer presses the shutter button, a mirror flips out of the way to expose the film or sensor. Digital models are often called DSLRs, with the D being short for digital.

What does DOP mean?

Director of Photography (DOP) is an alternate name for a cinematographer.

What are HMI lights used for?

Multi-kilowatt HMI lights are used in the film industry and for large-screen slide projection because of their daylight-balanced light output, as well as their efficiency. The lamps have long been a favorite among filmmakers and are a staple of film-school lighting courses.

What does HMI mean?

Hydrargyrum Medium-Arc Iodide (HMI) lights are among the most commonly used lights on a film set. HMI lights emit a daylight-balanced light with a blue hue. To power up, HMI lights require an electrical ballast, which ignites the metal-halide gas and mercury vapor mix used in the bulb itself. HMI lamps operate by creating an electrical arc between two electrodes within the bulb; this excites the pressurized mercury vapor, and the metal halides provide a very strong continuous light that has been a favorite of filmmakers for many decades.

What is a Steadicam?

A Steadicam is a brand of camera stabilizer mount for motion picture cameras, invented by Garrett Brown and introduced in 1975 by Cinema Products Corporation. It mechanically isolates the camera from the operator’s movement, allowing for a smooth shot even when the operator moves over an irregular surface.

What is a Steadicam used for?

A Steadicam is a camera stabilizing system used to capture tracking shots with motion picture cameras. It isolates the camera from the operator’s movement and makes the shot look smooth and controlled, capturing all the action without any wobbles.

How do you calculate film run time?

To calculate the total running time, add together the running times for the individual reels in an audiovisual film. If the total running time is more than five minutes, round it off to the nearest minute. If the total running time is less than five minutes, indicate it as minutes and seconds. To make it really simple, search online (try Kodak) for one of the several calculators that will quickly determine your film run times.
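
If you’d rather do the arithmetic yourself than use an online calculator, the conversion is simple: 35mm film has 16 frames per foot and 16mm has 40 frames per foot, so footage length times frames per foot, divided by the frame rate, gives the run time. A quick sketch (the function name is ours):

```python
def run_time_seconds(feet: float, frames_per_foot: int, fps: float = 24) -> float:
    # 35mm = 16 frames per foot; 16mm = 40 frames per foot.
    return feet * frames_per_foot / fps

# A 400ft roll of 16mm shot at sound speed (24fps):
secs = run_time_seconds(400, 40)
print(int(secs // 60), "min", int(secs % 60), "sec")  # -> 11 min 6 sec
```

The same formula explains the familiar rule of thumb that 35mm runs at 90 feet per minute at 24fps.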

What is a split diopter?

A split diopter is a partial lens that attaches to a standard camera lens and features at least two different focal planes. The lens attachment has the effect of greatly expanding the depth of field, so that the immediate foreground and the distant background can both be in sharp focus at the same time.

Why is the Bluescreen Blue?

All colors in our visual range are made up of a combination of the three primary colors: red, blue, and green. In the bluescreen process, an actor or object is filmed against an evenly lit (entirely one color) blue screen. In the compositing process, the blue element (the background screen) is removed via a color separation process. The screen is blue because blue is the smallest component in the color of human skin (skin color has more red and green elements), so when the blue is removed, it does not affect the appearance of the skin. This of course also means that the actor cannot wear certain blue clothing, and the object cannot have blue parts.
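
The color-separation principle can be sketched as a toy keyer: flag a pixel as “screen” whenever its blue channel clearly dominates red and green. Real compositing software is far more sophisticated (soft edges, spill suppression, and so on), and the threshold below is an arbitrary illustrative assumption:

```python
def is_screen_pixel(r: int, g: int, b: int, threshold: float = 1.3) -> bool:
    # Treat a pixel as background if blue clearly dominates
    # both of the other channels.
    return b > threshold * max(r, g)

# Skin tones carry more red and green than blue, so they survive the key:
print(is_screen_pixel(200, 150, 120))  # actor's skin -> False
print(is_screen_pixel(30, 40, 200))    # blue screen  -> True
```

This is also why blue costumes are forbidden on a bluescreen stage: any wardrobe pixel that trips the same test vanishes along with the background.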

Today, digital technology has almost completely replaced the traditional compositing processes, and the color of the background screen is becoming less important as greater accuracy in color separation can be achieved with computers. Did you know that in the television series Lois & Clark: The New Adventures of Superman, the Man of Steel was filmed against a green screen for the flying shots to prevent his blue tights from disappearing into the composited background? Fancy that!

For more information about bluescreens, you can find many books online on special effects techniques, at the Blue Screen/Chroma Key Page, Bluescreen.com, and of course in the trusty old American Cinematographer Manual from the ASC.

What are the basic elements of cinematography?

Cinematography comprises all on-screen visual elements, including lighting, framing, composition, camera motion, camera angles, film selection, lens choices, depth of field, zoom, focus, color, exposure, and filtration.

What should every cinematographer know?

Here are ten basic concepts all cinematographers should be familiar with:

  • Aspect Ratios & Anamorphic Lenses.

    • Aspect ratio used to be a more prominent issue for digital cinematographers than it is today: before the advent of high-definition cameras, the standard 4:3 aspect ratio of standard-definition TV was generally seen as undesirable for anyone looking for a “cinematic” look, because 4:3 (or 1.33:1) content was associated with broadcast TV, while widescreen compositions were what people expected to see in the theater. When we say “4:3,” we mean the image is four units wide and three units high. When we say “1.33:1,” we mean… well, you get it — the same thing. Many times, the “:1” is removed because it is implied – shooters will simply say “1.85” instead of “1.85:1.”
  • Bokeh.
    • Bokeh (pronounced like “bo” from “boat” and “ke” from “Kentucky”) is one of the chief reasons many shooters have switched to DSLRs. Bokeh is a term derived from the Japanese word “boke” which, roughly translated, means “blur quality.” Bokeh refers to the portions of an image that are defocused or blurry. In the filmmaker’s toolkit, bokeh is not only an aesthetically pleasing quality, but it also allows the filmmaker to focus the viewer’s eye on an object or area of interest in the frame. Bokeh is a function of shallow depth-of-field.

  • Compression & Bit Rate.

    • Compression refers to a method of reducing the amount of data a DSLR produces; all current video-shooting DSLRs employ some method of compression. If you’re used to shooting photos in JPEG format, you are used to capturing compressed images; while RAW can also employ compression, it is generally thought of as “uncompressed.” This is because, as far as most shooters are concerned, when we’re talking about compression we’re talking about lossy compression — meaning a codec (compression algorithm) that throws out data to reduce file size. As you can imagine, tossing out portions of an image has negative side effects, and while many codecs work perceptually to minimize the perceived impact, the difference is there. For example, if you upload a video to YouTube, the service recompresses your video to optimize it for internet delivery; you might not notice this compression, but videos that have been recompressed hundreds of times show clearly that every compression step throws out data along the way. On the positive side, however, lossy codecs are also the main reason we can record hours of footage onto inexpensive flash memory devices like CF and SD cards.
    • The most common compression formats in DSLRs are h.264 and MJPEG, and while both are lossy, h.264 is generally much more efficient. Bit rate is the amount of data per unit of time that a given codec adheres to; higher bit rates are almost always better because they use less compression. At press time, there are no DSLRs that shoot uncompressed video.
  • Depth of Field.
    • The amount to which objects in the foreground, mid-ground and background are all in focus at once is a function of depth of field. A shallow depth of field would mean that only one plane was in focus; a wide (or deep) depth of field would mean that all planes are in focus at once. Depth of field is determined by the focal distance and aperture size (see below for more info on Aperture). DSLRs exploded in popularity almost singlehandedly because of their ability to render images with a shallow depth of field. This is chiefly due to their massive sensor sizes.  On a basic level, shallow depth of field (DOF) allows filmmakers to blur out areas of the image they deem to be unimportant or undesired.

  • Exposure & Aperture.
    • Exposure refers to the amount of light allowed to enter the DSLR sensor (or any imaging surface). When shooting stills, DSLRs use a mechanical shutter to regulate exposure by opening for the desired amount of time (1/60th or 1/1000th of a second, for example) and then closing. DSLRs are generally rated to last for hundreds of thousands of shutter cycles, but at 24 frames per second, couldn’t your DSLR reach that limit very quickly?  No, because in video mode, DSLRs use an electronic shutter — the sensor basically turns on and off to regulate exposure, instead of relying on a physical barrier (the mechanical shutter) to regulate light. Aperture refers to the adjustable opening near the rear of the lens that lets light through — the amount of light it transmits is generally referred to as the F-stop (T-stop is very similar, except it’s measured instead of calculated). Note: keep in mind that the size of the aperture does not only affect the amount of light, but also the angle of light rays that are hitting the sensor — using a narrow aperture creates an image with a wide depth of field, whereas a large aperture creates an image with a shallower depth of field.

  • Focal Length.
    • Technically, focal length refers to the distance over which collimated rays are brought into focus. An easier way to think of it: focal length refers to image magnification. A longer focal length, like 100mm, makes distant objects appear larger, whereas those same objects will appear smaller with a shorter focal length, like 35mm. Focal length also determines angle of view; longer focal lengths have a narrower angle of view, whereas shorter focal lengths have a broader angle of view. Remember this… when it comes to focal length, a picture is worth a thousand words.

  • Frame Rate.
    • Frame rate is the frequency with which your DSLR captures consecutive images. This typically corresponds to the number right before a “P” in the case of progressive images, so that 24p is 24 frames per second, 30p is 30 frames per second, and 60p is 6,000,000 frames per second.

      Just kidding. Different frame rates have very different motion rendering characteristics, which, when combined with different shutter speeds, produce images that behave very differently. Motion pictures have had a standard frame rate of 24 frames per second since the 1920s, and audiences have come to associate this frame rate with cinematic content, so shooting at 24p is essential if you’re planning on shooting narrative material. However, remember you don’t always have to shoot at the same frame rate as you’re planning on distributing your material in. For example, if your DSLR can shoot 60p, this is a very effective way of acquiring slow-motion footage — anything shot at 60p can be played back at 40% speed in a 24p timeline for a flawless slow-motion effect, and it can generally be slowed down further with your editing software.

  • ISO & Noise.
    • ISO is actually the International Organization for Standardization, which is why you see it used in lots of places beyond photography — many businesses are certified ISO 9001, for example. As cinematographers we’re concerned with just one standardization, however — the one that pertains to the measurement of sensitivity in photography. ISO as it relates to digital photography is based on analog standards of film speed — while we won’t be shooting a frame of actual film with our DSLRs, our cameras are calibrated so that an ISO of 400 on our camera is roughly equivalent to a film SLR’s ISO 400. Each doubling of ISO is one stop: ISO 400 is twice as sensitive to light as ISO 200, ISO 200 is twice as sensitive as ISO 100, and so on.

    • The relationships between sensitivity and noise are basically linear, however, so the higher the ISO, the brighter the image — and the more noise contained in the image. However, thanks to sophisticated noise reduction and other processing tricks, DSLRs have managed to dramatically reduce noise at higher ISOs and can often blow film stock out of the water.

  • Progressive vs. Interlaced.
    • Interlacing was a workaround invented for older tech CRT monitors in the early 1930s that has lived far too long. In the early days, video bandwidth was more limited than today, and so engineers found a way to divide a frame into two images and display it using alternating fields. Progressive scanning is a method that captures and displays the lines of an image in sequence, which is akin to motion picture film with regards to motion rendering. Compared to interlaced images, progressive images have a higher vertical resolution, lower incidence of artifacts, and scale better (both spatially and temporally). Today, while there are plenty of video cameras that shoot interlaced footage, every DSLR camera can shoot progressive footage.

  • Shutter Speed

    • Shutter speed refers to the length of time an image is exposed. For film SLRs, it is measured by the amount of time the camera’s mechanical shutter is open, but for shooting video on DSLRs, it is simulated electronically. Shutter speed affects the amount of light that reaches the camera and the motion rendering of a moving image. Lower shutter speeds yield a brighter and smoother image (up to and including water & light blurring tricks), whereas a higher shutter speed results in a darker and more stroboscopic image. Motion picture film cameras typically shoot with a 180-degree shutter, which means the shutter is open 50% of the time (180 out of 360 degrees). In other words, the shutter is open for half of each frame interval; thus, at 24 frames per second, a 180-degree shutter is best emulated on a DSLR by choosing a shutter speed of 1/48.
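
The 180-degree rule in that last point boils down to one line of arithmetic: the shutter is open for angle/360 of each frame interval. A minimal sketch (the function name is ours):

```python
def shutter_denominator(frame_rate_fps: float, shutter_angle_deg: float = 180) -> float:
    # Shutter speed = (angle / 360) / frame rate; returning the
    # denominator gives the familiar "1/x second" form.
    return frame_rate_fps * 360.0 / shutter_angle_deg

print(shutter_denominator(24))  # -> 48.0, i.e. shoot at 1/48s at 24fps
print(shutter_denominator(60))  # -> 120.0, i.e. 1/120s at 60fps
```

The same relationship also explains why narrowing the shutter angle (say, to 90 degrees) darkens the image and makes motion more stroboscopic: the equivalent shutter speed gets faster.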

What is the Wide-Screen Maze?

One of the most common areas of confusion for independent filmmakers working with digital video is widescreen. In many cases, filmmakers today want their movies to be watched in a wide-screen format more akin to what we’re used to with film, rather than the squarer (4:3) aspect ratio found on television.

Many cameras, particularly the prosumer DV models, claim to have the ability to shoot widescreen (sometimes called 16:9), however this can be misleading because not all cameras are able to do true widescreen photography. Generally speaking, widescreen photography for digital video is handled in one of three ways:

1) Letterboxed

Most home camcorders and prosumer DV cameras (like the Sony VX-1000 and Canon XL-1) do wide-screen in a letterboxed format. Letterboxing is achieved by shooting a standard 4:3 picture and putting black borders on the top and bottom of the frame to create a picture shape that is the correct aspect ratio for the widescreen required. This is not true widescreen: the black space in your image is wasted, effectively reducing the vertical resolution of your frame. The black borders are simply black video being recorded on the top and bottom of the frame! This can lead to huge problems in post-production, so it’s recommended not to use in-camera widescreen functions if your camera only shoots in letterboxed format.

If you’re planning on using a letterbox format later down the road, frame your shots with the future position of the black borders in mind, but add the borders in the post-production process, where you have more control.

2) Anamorphic

Anamorphic is a method by which you can get true widescreen images using a standard 4:3 camera. It is achieved through the optics of the lens used – anamorphic lenses stretch the widescreen image vertically and squish it horizontally so that it fits into the 4:3 ratio. On the playback end (VTR, projector), a complementary lens (or digital process) returns the image to its original aspect ratio.

Confused yet? Getting your head around the concept of anamorphic photography can be a little difficult to start with.

Shooting anamorphic is probably the best way for independent filmmakers using DV to achieve a widescreen result, particularly if you are planning to blow up to 35mm, because you are recording at the full vertical resolution of your camera. However, the exact resolution differs depending on whether you are shooting PAL or NTSC. Note: You can get anamorphic adaptors and lenses for many of the prosumer DV cameras on the market today, such as the Canon XL-1, although it will cost you around $350 for starters.
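
The amount of optical squeeze an anamorphic adaptor must apply follows directly from the two aspect ratios involved. A quick sketch of the arithmetic (the function name is ours):

```python
def squeeze_factor(target_w: float, target_h: float,
                   sensor_w: float = 4, sensor_h: float = 3) -> float:
    # Horizontal squeeze needed to fit a target aspect ratio
    # onto a 4:3 imaging area.
    return (target_w / target_h) / (sensor_w / sensor_h)

# Fitting a 16:9 widescreen image onto a 4:3 CCD:
print(round(squeeze_factor(16, 9), 3))  # -> 1.333, i.e. a "1.33x" adaptor
```

On playback, the complementary lens or digital process simply applies the inverse of this factor to restore the original aspect ratio.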

3) True Widescreen Photography

The third way to achieve widescreen is to use a camera that has a rectangular CCD (rather than a squarish 4:3 CCD). Generally, rectangular CCDs are only found in more expensive professional cameras, such as Sony’s DVCAM range and hi-def gear, although over time these may start appearing in prosumer equipment as well.

What 16mm cameras can be picked up cheaply?

Although the digital revolution has empowered filmmakers in ways never seen before, a good many people still prefer the look of film. For the budget-challenged, the prospect of shooting a low-budget feature (or short) on film is largely kept alive by the availability of relatively cheap 16mm gear. While new 16mm products will give you that empty-wallet feeling, the age of the format means there is a whole host of cameras which can now be picked up relatively cheaply on the market today.

Bolex H-16

By far the most popular, easiest to use (and cheapest to buy) camera around is the venerable Bolex H-16. The favored camera of many war news reporters, these cameras are versatile and tough; the only downer being that they are clockwork. Yes, that means no batteries – you must wind them up and shoot away. Because of the popularity of these cameras, many enterprising individuals and companies have produced electric motors for the Bolex H-16 which can be attached relatively easily. Unfortunately, it’s not easy to shoot sync sound with an H-16, as they tend to be quite noisy and most standard models lack any kind of sync output devices; however, for short takes and non-sync scenes, the camera is great. The single-frame mode has also made the H-16 an extremely popular choice for use on an animation stand.

The Bolex H-16 comes in two different types: reflex (sometimes called “rex”) and non-reflex. The different names refer to how the camera’s viewfinder system works. Older H-16s use a parallax viewfinder which is mounted on the side of the camera or rotated into position using a lens turret. Consequently, there is always a margin of error involved when using non-reflex cameras, due to the inability to look through the viewfinder during live filming. Creativity and practice can minimize the effects of this issue.

Newer H-16s make use of the reflex viewing system, which uses a prism to split the light entering the lens. Most of the light is delivered to the film gate, however a small percentage is redirected to the viewfinder. The advantage is, you can generally look through the viewfinder while filming. The disadvantage is that you need to adjust your exposures to compensate for the slight degradation in the intensity of the light hitting the film surface.

Reflex cameras are by far the easier of the two to work with, which of course also means they tend to cost a little more on the secondhand market. For the most budget-friendly option, consider the non-reflex models, or look for the early rexes (versions 1-3), as newer models still cost close to $1000 and sometimes more.

Bell & Howell 70 Series

Also known as “Filmos”, the B&H 70 series is another dirt-cheap option, and they’re extremely tough (the US military used to use them). Like the H-16, Filmos are clockwork, so you can’t sync them to a sound recording device. Typically, you can pick up a Filmo body for $200-$600, depending on the condition and, of course, where you purchase it. Lenses are usually extra, but the beauty of the Filmo is that it uses the standard C-Mount, which means your lens options are extremely wide.

Auricon

If you’re in the market for an el cheapo sync sound camera, keep your eyes out for an Auricon (any variety). While they’re not the most user-friendly cameras around, and shooting with one for any length of time will definitely give your arm muscles a good workout, they tend to be pretty cheap on the secondhand market, which is a good thing.

Beaulieu R16

Another good option in the sync-sound department is the Beaulieu R16 (often referred to as a “Beuley”). These French cameras are rugged, versatile, and relatively cheap. The Beaulieu R16 takes a 100ft load internally or can be used with an external 200ft “mouse ears” magazine which sits on top of the camera. The cameras support speeds from single frame up to 64fps, and some models have an auto-exposure function (which only works with the correct servo lens). Most R16s also have a built-in light meter, although it’s very basic.

The Beaulieu R16 can be used to shoot sync sound; however, they tend to be a bit noisy in a confined space. The magazines are also notoriously cumbersome to load, so make sure you do all your loading in a comfortable place with plenty of time. Basic R16 packages tend to start at around $700 but can move into four figures if they include a decent lens (many R16s come with the sought-after Angenieux 12-120 zoom, which is worth as much as the camera body itself, if not more).

Krasnogorsk K-3

This Russian camera is solid, durable, and incredibly cheap. Bodies start at around $300, and a complete package for a new camera runs around $2,000. The basic K-3 is a clockwork camera, but there are options to add electric motors, and some can even shoot sync sound. There are also K-3s on the internet that have had a Super-16 modification. Some say the need for non-standard lenses is a major weakness of the K-3, but it’s possible to have them converted to a standard C-mount for not a huge amount of money, and you can even get adaptors for the major lens-mount formats.

Arriflex 16S

A little more upmarket (in both image and price) is the trusty old Arri 16S. One of the most respected old 16mm cameras around, the 16S was the main workhorse of the indie filmmaker prior to the digital revolution; Robert Rodriguez shot El Mariachi with a borrowed 16S. Known for their rugged, no-nonsense design and manageable weight, 16Ss can be used for sync-sound shooting (with the right motor) and will also do variable frame rates. They take 400ft magazines and standard Arri-mount lenses. Because of their dependability (and the Arri brand), 16Ss hold their value well if kept in good condition and consequently start in the range of $2,000-$3,000 on the secondhand market.

Others

Also worth a look are the Eclair NPR and the Cinema Products CP-16 (which was used for the film sequences in “The Blair Witch Project”). NPR packages start at around $3,000, while CP-16s can often be picked up for under $2,000.

Where can you Buy Them?

Buying secondhand camera gear can present something of a dilemma. The easiest place to find it is through camera shops and other dealers, but dealers are more likely to be clued in on the value of the gear, resulting in higher prices. The flip side is that you’ll probably get a better deal through a private sale, but it takes more effort to locate such cameras and their condition can be variable.

eBay is of course another place to look for these types of cameras, although it’s probably better to first try the usual secondhand channels in your area (newspapers, garage sales, etc.), even though the availability of such equipment is beginning to dry up. Another good source of cheap 16mm camera gear is organizations such as universities and government departments. The video and subsequent digital revolutions mean that these places sometimes have old film equipment lying around gathering dust which they’re willing to off-load if you can find it. Happy hunting!

What different film aspect ratios are around?

Generally, there are five aspect ratios commonly used in theatres today.

1.37:1 (often given as 1.33:1) – used for virtually all films made before 1953 (when CinemaScope was introduced) and occasionally still used in Europe; also used for documentaries, 16mm, and non-widescreen TV.

1.66:1 – used widely in Europe

1.75:1 – used in the UK, Australia, and New Zealand, sometimes in Europe. Close to the digital TV ratio of 16:9 (1.77:1)

1.85:1 – used in America/Canada and other places (particularly if the film is aimed at the US market)

2.35:1 – CinemaScope (also known as Panavision)

Occasionally you might also find…

SuperScope – used on the original version of Invasion of the Body Snatchers and occasionally on other films, such as Days of Heaven.

2:1 – used for some obsolete CinemaScope processes.

2.2:1 – the ratio of 70mm (often referred to as 65mm, which is the size of the camera negative; the extra 5mm on the print carries the soundtrack)

2.66:1 – used for some obsolete “ultra-wide” processes using 65mm negatives, as on “Ben Hur”.
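The ratios above are simply width divided by height, so you can use them to work out frame dimensions, or the size of the black letterbox bars when a wide film ratio is fitted into a narrower display. The function and pixel values below are an illustrative sketch of this arithmetic, not part of any standard:

```python
def letterbox_bar(display_w, display_h, film_ratio):
    """Height of each black bar when fitting a wider film
    aspect ratio (w/h) into a display without cropping."""
    image_h = display_w / film_ratio   # image height at full display width
    return max(0.0, (display_h - image_h) / 2.0)

# Example: a 2.35:1 CinemaScope frame on a 1920x1080 (16:9) display
print(round(letterbox_bar(1920, 1080, 2.35)))  # roughly 131 px per bar
```

The same arithmetic explains why 1.85:1 material shows only thin bars on a 16:9 (1.77:1) screen, while 2.35:1 material shows much thicker ones.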