Video Production Notes Unit 1
Unit-1
Film cameras
View camera
Rangefinder/viewfinder camera
Point and shoot or compact cameras
Single lens reflex camera (SLR)
Twin lens reflex camera (TLR)
Specialty cameras
Digital cameras
Rangefinder/viewfinder camera
Point and shoot or compact cameras
Single lens reflex camera (SLR)
DSLR is the term for digital SLR cameras
Specialty camera
View Camera
Built like an accordion, with a lens in the front, a viewing screen in the back, and flexible
bellows in between.
Used for:
▪ Landscapes
▪ Architectural photography
Rangefinder/viewfinder camera
A compact, lightweight camera that allows you to view the scene through a small
window.
Viewfinder cameras include inexpensive point-and-shoot cameras.
Rangefinders have a coupled rangefinder that allows manual focus.
Specialty cameras
Used for a specific purpose
Underwater cameras
Panoramic cameras
Polaroid Cameras
2. Lens - The lens assembly is several layers of lens elements of varying properties,
providing zoom, focusing, and distortion correction. It is the most important part of the camera.
4. Mode Dial ‐ Contains several symbols (differs by model), allows you to select a
shooting mode, automatic or manual or one of the pre‐defined settings.
5. Viewfinder – Small window that shows the image the camera's imaging sensor sees.
7. Focusing Ring ‐ found around the lens of SLR and DSLR cameras; turn to manually
focus the lens.
8. LCD Display - In some compact cameras this acts as the viewfinder; a small screen at the
back of the camera used for framing or reviewing pictures.
9. Flash ‐ Built‐in on the body of most compact and some DSLR cameras; can be fixed
or flip type; provides an instantaneous burst of bright light to illuminate a poorly lit
scene.
10. Control Buttons ‐ Usually includes a set of directional keys and a few other buttons
to activate certain functions and menus, this is used to let users interact with the
camera's computer system.
12. Zoom Control - Usually marked with W and T, which stand for "Wide" and "Tele";
used to control the camera's lens to zoom in or zoom out. For DSLR cameras, the
zoom is usually controlled by a zoom ring on the lens.
13. Battery Compartment - Holds the batteries; varies in size and shape by camera
type/brand.
15. Flash Mount (Hot‐Shoe) ‐ Standard holder with contact plates for optional flash
accessory.
16. Diopter - Varies the focal length of the viewfinder's lens so that people wearing
eyeglasses can see clearly through it even without their eyeglasses.
17. Tripod Mount ‐ where your standard tripod or monopod is attached for added
stability
TYPES OF LENS:-

FOCAL LENGTH        TYPE OF LENS           TYPE OF PHOTOGRAPHY
8-24mm              Fisheye (Ultra-wide)   Panoramic shots, cityscapes, landscape, real estate, abstract.
24-35mm             Wide Angle             Interiors, landscapes, architecture, forest photography.
35, 50, 85, 135mm   Standard Prime         Portraits, weddings, street/documentary photography.
55-200mm            Zoom                   Portraits, weddings, wildlife photography.
50-200mm            Macro                  Ultra-detailed photography (rings, nature).
100-600mm           Telephoto              Sports, wildlife, astronomy.
Zoom Lenses
Zoom lenses are great due to their extreme versatility! These lenses allow you to stay in
one place (no running around or twisted ankles) and zoom to multiple focal lengths with
one autofocus function! A popular zoom lens is the Canon 70-200mm, which means it can
zoom as wide as 70mm and as tight as 200mm (and every focal length in between)! What's
really cool is that zoom lenses (if on autofocus) can maintain that focus while you change
your focal length, which means you can snap quickly.
A zoom lens is a variable focal length lens. It allows the camera operator to zoom in and zoom out
on a subject without moving the camera forward or backward. The zoom lens enables the camera
operator to select any coverage within its range. Most video and television cameras come with
optical zoom lenses. An optical zoom uses a lens to magnify the image and send it to the chip. The
optical zoom retains the original quality of the camera's chips.
Pros:
Zoom capabilities, meaning you get multiple focal lengths in ONE lens.
It allows you to stay in one position while using the zoom feature.
GREAT for weddings, portraits and wildlife photography.
Cons:
Heavy.
Not as sharp as prime lenses.
Typically a narrower maximum aperture, meaning many zoom lenses won't open wider than f/2.8
(creating less background blur/bokeh).
CHARACTERISTICS OF LENSES:-
Focal length:- The term focal length is simply an optical measurement—the distance between the
optical centre of the lens and the image sensor (CCD or CMOS) when you are focused at a great
distance such as infinity. It is generally measured in millimetres (mm). A lens designed to have a
long focal length (long focus) behaves as a narrow angle or telephoto system. The subject appears
much closer than normal, but you can only see a smaller part of the scene. Depth and distance can
look unnaturally compressed in the shot. When the lens has a short focal length (short focus) this
wide-angle system takes in correspondingly more of the scene. But now subjects will look much
farther away; depth and distance appear exaggerated. The exact coverage of any lens depends on
its focal length relative to the size of a camera CCD’s image size.
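The relationship between focal length and coverage described above can be sketched numerically. A minimal sketch, assuming a simple rectilinear lens focused at infinity and a full-frame 36mm-wide sensor (both illustrative assumptions, not values from the notes):

```python
import math

def field_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view for a rectilinear lens focused at
    infinity: 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A long focal length gives a narrow (telephoto) angle of view,
# a short one a wide angle -- matching the description above.
for f in (24, 50, 200):
    print(f"{f}mm -> {field_of_view_deg(f):.1f} degrees")
```

On these assumptions a 200mm lens covers only about a 10-degree slice of the scene, while a 24mm lens takes in roughly 74 degrees, which is why the long lens makes subjects "appear much closer than normal" at the cost of seeing a smaller part of the scene.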
Focus:- Focus is an incredibly effective tool for directing the audience’s attention. The eye is
always attracted to the part of the image that is in focus. This means that a knowledge of focus,
depth of field, and lenses is essential if the camera operator and director want to use the camera
creatively as a tool for storytelling.
A lens cannot focus on subjects closer than its "minimum focus distance" (MFD or MOD).
Telephoto lenses may have an MFD that is as little as a foot and a half away. It is not possible to
focus on subjects nearer than that. Extremely long lenses may have an MFD that may be several
yards/ meters from the camera. The wide-angle lens generally allows focusing right up to the
camera lens.
Depth of field:- Depth of field, or the focused zone, is usually defined as the distance between
the nearest and farthest objects in focus. When shooting something in a distance, everything seems
clearly focused. But refocus the lens onto something a few feet away, and only a limited amount of
the shot will really appear sharply focused. Now focus on something very close to the camera, and
sharpness becomes restricted to an even shallower zone. How obvious this effect is varies with the
amount of detail in the scene. The depth of field varies with the following factors: the lens
aperture (f-stop), the focal length of the lens, and the distance from the camera to the subject.
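The near and far limits of the focused zone can be estimated with the standard hyperfocal-distance approximation. A sketch; the 0.03mm circle of confusion is a common full-frame assumption, not a value from the notes:

```python
def depth_of_field_m(focal_mm, f_number, subject_m, coc_mm=0.03):
    """Approximate near/far limits (in metres) of acceptable sharpness."""
    f = focal_mm / 1000.0            # focal length in metres
    c = coc_mm / 1000.0              # circle of confusion in metres
    hyperfocal = f * f / (f_number * c) + f
    near = hyperfocal * subject_m / (hyperfocal + subject_m - f)
    if subject_m >= hyperfocal:      # beyond hyperfocal, far limit is infinity
        return near, float("inf")
    far = hyperfocal * subject_m / (hyperfocal - subject_m)
    return near, far

# Stopping down from f/2.8 to f/11 visibly widens the focused zone:
print(depth_of_field_m(50, 2.8, 3))   # roughly (2.73, 3.34)
print(depth_of_field_m(50, 11, 3))    # roughly (2.16, 4.95)
```

The same formula also shows why focusing close to the camera restricts sharpness to a shallower zone: reducing `subject_m` pulls the near and far limits together.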
Aspect ratio
Another important concept with the pixel resolution is aspect ratio.
Aspect ratio is the ratio between the width of an image and its height. It is
commonly written as two numbers separated by a colon (e.g. 16:9). This ratio differs between
images and between screens. The common aspect ratios are:
1.33:1, 1.37:1, 1.43:1, 1.50:1, 1.56:1, 1.66:1, 1.75:1, 1.78:1, 1.85:1, 2.00:1, etc.
Advantage
Aspect ratio maintains a balance between the appearance of an image on the screen,
meaning it maintains a ratio between horizontal and vertical pixels. It keeps the image
from being distorted when it is resized.
For example
This is a sample image, which has 100 rows and 100 columns. If we wish to make it
smaller, on the condition that the quality remains the same (in other words, the image
does not get distorted), here is how it happens.
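The arithmetic behind an aspect-ratio-preserving resize can be sketched in a few lines (the function name is illustrative):

```python
def resize_keep_aspect(width, height, new_width):
    """Scale a width x height image to new_width while keeping the
    width:height ratio, so the image is not distorted."""
    ratio = height / width
    return new_width, round(new_width * ratio)

print(resize_keep_aspect(100, 100, 50))      # -> (50, 50): the 1:1 image stays square
print(resize_keep_aspect(1920, 1080, 1280))  # -> (1280, 720): 16:9 stays 16:9
```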
Resolution
The resolution can be defined in many ways. Such as pixel resolution, spatial resolution,
temporal resolution, spectral resolution. Out of which we are going to discuss pixel
resolution.
You have probably seen that in your own computer settings, you have monitor resolution
of 800 x 600, 640 x 480 etc.
In pixel resolution, the term resolution refers to the total count of pixels in a
digital image. For example, if an image has M rows and N columns, then its resolution can
be defined as M x N.
If we define resolution as the total number of pixels, then pixel resolution can be given
as a pair of two numbers. The first number is the width of the picture, or the pixels across
its columns, and the second number is the height of the picture, or the pixels down its rows.
We can say that the higher the pixel resolution, the higher the quality of the image.
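The M x N definition above amounts to a one-line multiplication; a sketch using a Full HD frame as the example (1080 and 1920 are standard HD figures, not from the notes):

```python
def pixel_resolution(rows, columns):
    """Resolution of an M-row x N-column image: total pixel count = M x N."""
    return rows * columns

# A 1080-row x 1920-column (Full HD) frame:
total = pixel_resolution(1080, 1920)
print(total)                      # 2073600 pixels
print(f"{total / 1e6:.1f} MP")    # ~2.1 megapixels
```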
ENG SYSTEM:- Operational requirements for ENG (electronic news gathering) include fixed,
nomadic and mobile applications, ranging from stationary reporting by journalists and mobile
camera coverage of scenes of regional and world conflict to aerial coverage of natural
disasters. Much news gathering takes place in
the central business districts of major cities, including sites close to major airports through
to rural areas. ENG operations often involve the setting up of an unplanned P-P link or
series of links. For daily news gathering in major city areas, broadcast network operators
have utilized fixed collection sites operating in a number of bands for analogue or digital
ENG. ENG transmissions are consolidated from multiple nomadic operations over a large
(up to 100 km radius) area. ENG collection sites are operated, in most cases, by TV
networks in major city areas, where the typical central collection site is located within the
city centre, on the roof of a high building, with a range of steerable and fixed antennas. Many TV
networks often have an alternative dedicated ENG collection site mounted on their
broadcast transmission tower. In most cities these are located near the edge of the urban
area.
EFP SYSTEM:- The EFP (electronic field production) system is similar to that for ENG, but may
use more than one camera, each feeding its output to a separate VTR.
Types of shots:
1. Wide shot (WS)- used to establish the location or setting, sets the stage, and can also be used to
introduce action, shows the whole scene, orientates the viewer
2. Full shot (FS)- frame a person from head to toe or completely frame an object. A full shot is used
either to establish or follow a character.
3. Medium shot (MS)- frame a person from the waist up. A medium shot is used to provide new
visual information or show a closer view of the action. It also adds visual variety in editing.
4. Long shot (LS) - are full shots, but show the person at a greater distance.
5. Head and shoulder shot (H & S) - frames a person from the chest up. The head and shoulders
shot provides a closer view of a character and can be used as a listening or reaction shot. This is the
standard framing for most interviews where there are two subjects engaged in conversation.
6. Close-up (CU) - head shot, just above the shoulders. This shot is used to provide a more intimate
view of a character or show expression. The close-up can also be used as a listening or reaction
shot, or to show the details of an object.
7. Extreme close-up (XCU) - frames a head shot from the tip of the chin to the middle of the
forehead, or any other equivalent space on an object, animal, etc. This shot shows drama or
tension.
Types of angles:
Bird’s Eye Angle: It is often used as an establishing shot, combined with an extreme long
shot to set the scene.
High Angle: The audience looks down on a person to make the character appear small or
vulnerable.
Neutral Angle: This is the most commonly used angle. It allows the audience to feel
comfortable with the characters.
Low Angle: The audience looks up at a character. This is used to make a character look
powerful and to make an audience feel small and vulnerable.
MOVEMENTS:
Pan - Horizontal movement, left and right, you should use a tripod for a smooth effect.
Why: To follow a subject or show the distance between two objects. Pan shots also work
great for panoramic views such as a shot from a mountaintop to the valley below.
Tilt - Vertical movement of the camera angle, i.e. pointing the camera up and down (as
opposed to moving the whole camera up and down).
Why: Like panning, to follow a subject or to show the top and bottom of a stationary
object. With a tilt, you can also show how tall something is.
Hand Held - Hand-held camera or hand-held shooting is a film and video technique in
which a camera is literally held in the camera-operator's hands--as opposed to being placed
on a tripod. The result is an image that is perceptibly shakier than that of a tripod-mounted
camera.
Why: Due to the spontaneity of the action, many news crews and most documentaries use
hand-held shooting techniques. Hand-held shots might serve to help create a more urgent,
'real' or dramatic feel to a shot.
Dolly - The camera physically moves toward, away from, or alongside the subject, typically
mounted on a wheeled platform or track. Why: To follow an object smoothly.
Dolly Zoom - A technique in which the camera moves closer or further from the subject
while simultaneously adjusting the zoom angle to keep the subject the same size in the
frame.
Why: This combination of lens movement and camera movement creates an effect like no
other movement. It may create the illusion that the world is closing in on the subject, or
the opposite, that the world and environment are expanding. Another possible 'effect' is as
if the subject is floating towards the camera.
Zoom - Technically this isn't a camera move, but a change in the lens focal length which gives
the illusion of moving the camera closer or further away.
Why: To bring objects at a distance closer to the lens, or to show size and perspective.
Pedestal - Moving the camera position vertically (up and down) with respect to the subject
(different than a tilt, the camera remains horizontal but moves vertically).
Why: You pedestal the camera up or down to get the proper height you prefer.
Crane shot - Basically, dolly-shots-in-the-air. A crane (or jib), is a large, heavy piece of
equipment, but is a useful way of moving a camera - it can move up, down, left, right,
swooping in on action or moving diagonally out of it.
Why: Gives a birdʼs eye view. It looks as if the camera is swooping down from above.
Follow -The camera physically follows the subject at a more or less constant distance.
Why: This movement provides the audience the characterʼs perspective throughout the
action and movement of the scene.
Aerial Shot - An exciting variation of a crane shot, usually taken from a helicopter or
cabling suspension system.
Why: This is often used at the beginning of a film, in order to establish setting and
movement.
Rack Focus - This is a lens movement, not a camera movement. Focus on one object, like
an actorʼs face, and have everything behind him/her out of focus. Then adjust the focus so
his/her face becomes blurred and the object behind the actor becomes clear.
Why: You are actually making a transition similar to an edit by constructing two distinct
shots. You often see the rack focus used to switch from one actor’s face to another during
conversation or tense moments. This literally focuses the attention on part of the screen,
or object/actor at a time.
Focus Effects:
Deep focus: Requires a small aperture and lots of light; it means that the
foreground, middle ground and background of the frame all remain in focus.
It allows everything in the shot to be in focus at the same time.
Shallow focus: This type of focus is the opposite of deep focus. In this shot only the
foreground is in focus, while the background remains blurred.
It is a function of a narrow depth of field and implies that only one plane of the
frame will remain sharp and clear (usually the foreground). It is typically a feature of
close-ups.
Shifting focus: This type of focus shifts between two points. It can be used to
draw the audience's attention or to shift between two characters in dialogue.
Composition in photography is all about arranging the elements in your photos for
maximum impact.
Another tip: by having leading lines, you direct the viewer’s eyes to the main subject.
There are some compositions that are a delight for our eyes. For example, the
Fibonacci spiral appears in nature (shells, and certain plants). We are generally
attracted to faces which are more symmetrical (we see these as more ‘beautiful’).
1. Simplicity: It allows the viewer to focus on one aspect of an image without drifting
away. Try to have at least one subject that sticks out the most, that your eyes can rest
upon without drifting away too much.
If in doubt, set your lens to a slightly wider angle and capture more of the
surroundings than you need. This gives you a bit of space to play with later
on, allowing you to crop or recompose the photo once you've had a chance
to examine it on your computer.
2. Rule of thirds: Imagine that your image is divided into 9 equal segments by 2
vertical and 2 horizontal lines. The rule of thirds says that you should position the
most important elements in your scene along these lines, or at the points where
they intersect.
Doing so will add balance and interest to your photo. Some cameras even offer
an option to superimpose a rule of thirds grid over the LCD screen, making it
even easier to use.
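The grid positions described above are simple to compute; a sketch for working out where those intersection points fall on a frame (function name and the 1920x1080 example are illustrative):

```python
def rule_of_thirds_points(width, height):
    """The four intersection points of the two vertical and two
    horizontal lines that split the frame into 9 equal segments."""
    xs = (width / 3, 2 * width / 3)
    ys = (height / 3, 2 * height / 3)
    return [(x, y) for y in ys for x in xs]

# For a 1920x1080 frame, place key elements near these points:
print(rule_of_thirds_points(1920, 1080))
# -> [(640.0, 360.0), (1280.0, 360.0), (640.0, 720.0), (1280.0, 720.0)]
```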
3. Emphasis:
Role of light in composition:
Lighting is a key factor in creating a successful image. Lighting determines not only
brightness and darkness, but also tone, mood and the atmosphere. Therefore it is
necessary to control and manipulate light correctly in order to get the best texture,
vibrancy of colour and luminosity on your subjects. By distributing shadow and
highlights accurately, you can create stylized professional looking photographs.
Positioning Light
The direction your light is coming from has a huge impact on how it falls on your subject.
Light originating from behind the camera and pointing directly onwards gives you very flat
lighting. It will also cause shadows to fall in the background of the image. Side lighting
produces a far more interesting light, as it shows the shape of the subject much more and
casts it in partial shadow, giving it a more dramatic look. Rembrandt lighting is an effective,
common example of this lighting type. Lighting sourced from behind your subject gives
an alternative effect: most of the light rims the edges of the subject, making them
brighter, which creates a more distinctive and dramatic photo.
Shaping Light
Adding a diffuser to your light source can reduce glare and harsh shadows and also
diminishes blemishes on your subject. It gives your artificial light a softer more natural
looking result. You can diffuse light numerous ways. Using soft boxes, umbrellas and sheer
heatproof material work really well to achieve this result.
Manipulating Light
Light can be manipulated to fall on a particular area of interest on your subject. This can be
achieved through the use of diffusers and reflectors. Collapsible reflectors shape sunlight or
bounce flash light onto an area you'd prefer to highlight. Spot lights can also be covered in
light shapers that enable you to have more control over the direction the light will fall and
how broad the light spans.
Unit- 3
LIGHTING:
Effective lighting is the essence of cinematography. Often referred to as painting with light,
the art requires technical knowledge of film stocks, lighting instruments, color, and diffusion
filters, and an understanding of their underlying concepts: exposure, color theory, and optics.
Lighting Properties:
Any source of light can be described in terms of four unique and independent
properties:
• Intensity: Light can range from intense (sunlight) to subdued (match light). We measure
intensity in units called foot-candles; one foot-candle is the amount of light a candle
generates at a distance of one foot.
• Color: Light has a color balance, or bias, which is dependent on the source (daylight,
tungsten, etc.).
• Quality: Light can range from harsh (hard, direct) to diffused (soft) in
quality.
• Angle: The angle of the source, relative to the reflective object or subject, affects intensity
and quality.
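Foot-candles convert to the metric unit lux by a fixed factor; this conversion is standard photometry rather than something stated in the notes, and the example levels are illustrative:

```python
def footcandles_to_lux(fc):
    """1 foot-candle = 1 lumen/ft^2, which is about 10.764 lux (lumen/m^2)."""
    return fc * 10.764

# Illustrative illuminance levels (assumed figures, not from the notes):
print(f"{footcandles_to_lux(1):.1f} lux")     # a candle at one foot
print(f"{footcandles_to_lux(1000):.0f} lux")  # a bright daylight-scale level
```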
The key light is the main light in a three-point setup; it ensures that whatever you are
filming, the subject is well illuminated in the shot. Key lights should not be placed
directly in front of the talent or subject, but instead slightly off to the side. While
just having this light may look like enough light, if you want a well-lit piece, you'll
want to include the other two lights to round out the subject.
The fill light fills the dark side of your subject. The fill light allows you to control
the overall feel of your shot depending on how much you dim or lighten the fill
light. A dim fill light will give you more of a harsh, film-noir type of shadow, while
a brighter fill will help give your subject a more even look.
You should always have a fill light in place even if you want a shadowy look to your
talent so that you are able to see a little detail on the dark side.
The third light is the back light. A back light will put another element to the image
of your talent and will push him or her off from the background, again adding another
dimension. For this all you need to do is place a light behind your subject pointed at
the back of their neck and high enough to be out of frame. Watch that you don’t have
the light too bright or the effect you get may not be the look you were going for.
Basically, there are three common camera angles used in the production of video:
Eye level
High angle
Low angle
Eye level is what’s almost always used in corporate video production. It’s considered
neutral and is more flattering than either low or high. It’s also very corporate, and
adds little drama to the shoot. It’s the kind of angle you see on the news and romantic
comedies.
High angle is shooting down on a person or having the camera significantly higher.
Wilde states that this can give the subject a weak or childlike look.
Low Angle: On the other hand, having the camera significantly lower than the
subject and therefore “looking up” at the person creates an intimidating or foreboding
look.
“If you shoot from a low angle, the character looks almost like a giant.”
Gels: A gel is a transparent material that has color on it. Gels are used extensively in
movie production. We can use these gels for color correction or to add color to a
scene for dramatic effect. Gels are made of thin sheets of polyester or
polycarbonate. You place them directly in front of the lights. Gels will not last
forever; they will fade, or most of them will melt, because of the intense heat of the lights.
Diffusers: A diffuser is a translucent material that you place in front of a light to soften
highlights and shadows. You will also use a diffuser to reduce contrast and increase beam
angle. Contrast refers to the difference between one tone and another or between the darkest
and lightest parts of a scene. The light that comes through a diffuser is called diffused
light. Diffused light creates softer shadows than a hard, uncovered light.
Reflectors: There are two types of reflectors. You use the first type for indoor lighting.
This reflector is bowl-shaped and can come in various sizes. You use this type of
reflector to shape and intensify the light's beam. The second reflector type is for
outdoor use. You use these reflectors to redirect light. They are flat and typically
colored white, silver, or gold.
Cutters: Cutters block light; photographers also call them flags, siders or gobos. A flag is an
opaque panel used to block light and shadow the subject, camera lens or the
background. You can also use it to hide lights within a scene.
Key Lighting
The key light is the main light of a scene or subject. This means it’s normally the strongest
light in each scene or photo. Even if your lighting crew is going for a complicated multi-
light setup, the key light is usually the first to be set up.
However, just because it’s your “main” light doesn’t mean it always has to be facing your
subject. You can place your key light anywhere, even from the side or behind your subject to
create a darker mood. Just avoid placing it near or right beside the camera as this will create
flat and direct lighting for your subject.
Fill Lighting
As the name suggests, this technique is used to “fill in” and remove the dark, shadowy areas
that your key light creates. It is noticeably less intense and placed in the opposite direction
of the key light, so you can add more dimension to your scene.
Because the aim of fill lighting is to eliminate shadows, it’s advisable to place it a little
further and/or diffuse it with a reflector (placed around 3/4 opposite to the key light) to
create softer light that spreads out evenly. Many scenes do well with just the key and fill
studio lighting as they are enough to add noticeable depth and dimension to any object.
Backlighting
Backlighting is used to create a three-dimensional scene, which is why it is also the last to
be added in a three-point lighting setup. This also faces your subject—a little higher from
behind so as to separate your subject from the background.
As with fill lighting, you’ll want to also diffuse your backlight so it becomes less intense and
covers a wider area of your subject. For example, for subject mid-shots, you’ll want to also
light up the shoulders and base of the person’s neck instead of just the top of their head. This
technique can also be used on its own, without the key and fill lights if you’re aiming for a
silhouette.
Side Lighting
Needless to say, side lighting is for illuminating your scene from the side, parallel to your
subject. It is often used on its own or with just a faint fill light to give your scene a dramatic
mood or what’s referred to as “chiaroscuro” lighting. To really achieve this effect, your side
light should be strong so as to create strong contrast and low-key lighting that reveals the
texture and accentuates the contours of your subject.
When used with a fill light, it’s advisable to lessen the fill light’s intensity down to 1/8 of
that of the side light to keep the dramatic look and feel of a scene.
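That 1/8 figure corresponds to a three-stop difference, since each photographic stop halves the intensity; a quick sketch of the arithmetic:

```python
import math

def ratio_in_stops(key_intensity, fill_intensity):
    """Lighting ratio expressed in stops: each stop is a
    doubling/halving of intensity, so stops = log2(key / fill)."""
    return math.log2(key_intensity / fill_intensity)

# Fill at 1/8 of the side light's intensity, as suggested above:
print(ratio_in_stops(8, 1))   # -> 3.0 stops difference
```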
Practical Lighting
Practical lighting is the use of regular, working light sources like lamps, candles, or even the
TV. These are usually intentionally added in by the set designer or lighting crew to create a
cinematic night time scene. They may sometimes be used to also give off subtle lighting for
your subject.
However, practical lights are not always easy to work with, as candles and lamps are
typically not strong enough to light up a subject. A hidden, supplementary motivated light
(more on that later) may be used or dimmers can be installed in lamps so the light’s intensity
can be adjusted.
Bounce Lighting
Bounce lighting is about literally bouncing the light from a strong light source towards your
subject or scene using a reflector or any light-colored surface, such as walls and ceilings.
Doing so creates a bigger area of light that is more evenly spread out.
If executed properly, bounce lights can be used to create a much softer key, fill, top, side, or
backlighting, especially if you don't have a diffuser or softbox.
Soft Lighting
Soft lighting doesn’t refer to any lighting direction, but it’s a technique nonetheless.
Cinematographers make use of soft lighting (even when creating directional lighting with
the techniques above) for both aesthetic and situational reasons: to reduce or eliminate harsh
shadows, create drama, replicate subtle lighting coming from outside, or all of the above.
Hard Lighting
Hard light can be sunlight or a strong light source. It’s usually unwanted, but it certainly has
cinematic benefits. You can create hard lighting with direct sunlight or a small, powerful
light source.
Despite it creating harsh shadows, hard lighting is great for drawing attention to your main
subject or to an area of the scene, highlighting your subject’s contour, and creating a strong
silhouette.
High Key
High key refers to a style of lighting used to create a very bright scene that’s visually
shadowless, often close to overexposure. Lighting ratios are ignored so all light sources
would have pretty much the same intensity. This technique is used in many movies, TV
sitcoms, commercials, and music videos today, but it first became popular during the classic
Hollywood period in the 1930s and 40s.
Low Key
Being the opposite of high key, low key lighting for a scene would mean a lot of shadows
and possibly just one strong key light source. The focus is on the use of shadows and how
they create mystery, suspense, or drama for a scene and character instead of on the lighting
itself, which makes it great for horror and thriller films.
Motivated Lighting
Motivated lighting is used to imitate a natural light source, such as sunlight, moonlight, and
street lamps at night. It’s also the kind of lighting that enhances practical lights, should the
director or cinematographer wish to customize the intensity or coverage of the latter using a
separate light source.
To ensure that your motivated lighting looks as natural as possible, several methods are
used, such as the use of filters to create window shadows and the use of colored gels to
replicate the warm, bright yellow light coming from the sun or the cool, faint bluish light
from the moon.
Ambient Lighting
Using artificial light sources is still the best way to create a well-lit scene that’s closely
similar to or even better than what we see in real life. However, there’s no reason not to
make use of ambient or available lights that already exist in your shooting location, be it
sunlight, moonlight, street lamps, or even electric store signs.
When shooting during the day, you could always do it outdoors and make use of natural
sunlight (with or without a diffuser) and supplement the scene with a secondary light for
your subject (bounced or using a separate light source). Early in the morning and late in the
afternoon or early evening are great times for shooting outdoors if you want soft lighting.
The only downside is that the intensity and color of sunlight are not constant, so remember
to plan for the weather and sun placement.
The two most common outdoor lighting problems both involve shadows. When
shooting on a bright, sunny day, the sun will produce harsh shadows on your
subject. Aside from moving to a different location where the sun is not a factor,
two possible solutions to the problem are diffusion and fill light. Just as the
light produced by a lighting instrument can be softened by the use of diffusion
material, so can the light from the sun. Attach a sheet of
commercially available diffusion material to a frame made out of plastic pipe.
Hold the frame above the subject, allowing the sunlight to pass
through it. (Be careful to avoid casting a shadow from the frame onto your
subject.) The diffusion material will soften the light and create a soft modelling
effect on your subject.
An alternative solution is to use fill light to soften the shadows. A simpler way
to provide the fill light is to use a reflector. A white or silver reflector can be
used to reflect a softer or harder light back to the subject, filling in the shadows
created by the sun.
In film production, lip-synching is often part of the postproduction phase. Dubbing foreign-
language films and making animated characters appear to speak both require elaborate lip-
synching. Many video games make extensive use of lip-synced sound files to create an
immersive environment in which on-screen characters appear to be speaking. In the music
industry, lip-synching is used by singers for music videos, television and film appearances
and some types of live performances. Lip-syncing by singers can be controversial to fans
attending concert performances, who expect to view a live performance.
Human Voice: The human voice consists of sound made by a human being using the vocal
tract, such as talking, singing, laughing, crying, screaming, etc. The human voice frequency
is specifically a part of human sound production in which the vocal folds (vocal cords) are
the primary sound source. (Other sound production mechanisms produced from the same
general area of the body involve the production of unvoiced consonants, clicks, whistling
and whispering).
Generally speaking, the mechanism for generating the human voice can be subdivided into
three parts; the lungs, the vocal folds within the larynx (voice box), and the articulators.
Natural Sound: Ambience (also known as atmosphere or background) consists of the
sounds of a given location or space. Ambience is normally recorded in stereo by the sound
department during the production stage of filmmaking.
Every location has distinct and subtle sounds created by its environment. These sound
sources can include wildlife, wind, music, rain, running water, thunder, rustling leaves,
distant traffic, aircraft and machinery noise, the sound of distant human movement and
speech, creaks from thermal contraction, air conditioning and plumbing noises, fan and
motor noises, and harmonics of mains power.
TYPES OF SOUND
Sound effects: Sound effects are artificially created or enhanced sounds, or sound processes
used to emphasize artistic or other content of films, television shows, live performance,
animation, video games, music, or other media. In motion picture and television production,
a sound effect is a sound recorded and presented to make a specific storytelling or creative
point without the use of dialogue or music.
The term often refers to a process applied to a recording, without necessarily referring to the
recording itself. In professional motion picture and television production, dialogue, music,
and sound effects recordings are treated as separate elements. Dialogue and music
recordings are never referred to as sound effects, even though the processes applied to such
recordings, such as reverberation or flanging, are often called "sound effects".
Music: Music is an art form and cultural activity whose medium is sound organized in time.
The activities describing music as an art form or cultural activity include the creation of
works of music (songs, tunes, symphonies, and so on), the criticism of music, the study of
the history of music, and the aesthetic examination of music.
The common elements of music are pitch (which governs melody and harmony), rhythm,
dynamics (loudness and softness), and the sonic qualities of timbre and texture. Music is
performed with a vast range of instruments and vocal techniques ranging from singing to
rapping; there are solely instrumental pieces, solely vocal pieces (such as songs without
instrumental accompaniment) and pieces that combine singing and instruments.
In any kind of documentary or informational program, basic decisions about the structure of
the audio portion of the program centre on the presence or absence of a narrator or host.
Some documentarists prefer to let the story tell itself and are therefore resistant to the use of
a narrator. Whether or not to use a narrator is a matter of both your personal preference and
the goals of the program. A narrator might be an unnecessary intrusion into a program
designed as an ethnographic study of people in a particular community. On the other hand, a
program on the local pasta factory might be difficult to complete without a narrator to
explain the technical elements of the factory’s operation and to link together the program’s
various segments. Narration serves many useful functions. It can quickly provide
expositional details that might take considerable time to develop if left to emerge from the
comments made by the participants in the program. Also, narration can effectively present
technical information in nontechnical terms. Often, experts on a particular subject present
information that is too detailed or complex. Narration is an effective way of providing
transitions or bridges from one segment of the program to another. In addition, narration is
often used to introduce speakers as they appear within a program. This is an effective and
economical technique that eliminates the need for intrusive name superimpositions.
One of the first decisions that must be made when designing an interview-based program is
whether the interviewer’s questions will be incorporated into the final program. If the
questions will be used, the interviewer must be miked. Furthermore, if you plan to include
the questions, you must decide whether the interviewer will appear on camera or whether the
questions will be asked from off camera.
In a single-camera shoot, on-camera questions are typically recorded after the interview is
completed. This technique is called shooting question re-asks.
Question re-asks should be shot immediately after the interview is completed and in the
same location in which the interview was recorded to guarantee that the sound quality of the
questions matches the sound quality of the answers. The presence of consistent ambient
room noise is particularly important in determining that the sound quality matches.
In addition to shooting the question re-asks in the same location, many videographers make
it a practice to record some additional room tone—the ambient sound present in the room.
Sometimes this is called wild sound. If a question needs to be reconstructed, recorded, and
inserted into the program later on, room tone can be dubbed in under the question so that it
will not sound like the question was asked at a different time or in a different place.
MAKING A TRACK CHART
Most videotape recording formats provide at least two tracks for recording sound in the
field, but the number of tracks available during postproduction may be significantly larger.
In nonlinear video editing systems and digital audio workstations designed for
postproduction, there commonly will be 4, 6, 8, 16, or more available tracks. Even in a fairly
simple video production with VO, SOT, natural sound, and music, you may find that you
need four audio tracks to accommodate all your sound sources during postproduction.
An audio track chart—simply a log of each of the sound sources arranged along a time line
of where they occur in the program—is a useful device to help plan and organize your
production. First describe each of the video shots in sequence and then assign each of the
sound sources to a different audio track in the time line. Be consistent about where you place
similar sources. For example, in a four-track plan you may want to keep principal voice
segments (VO and SOT) in track 1; natural sound and/or sound effects in track 2; and stereo
music in tracks 3 and 4 (3 = left music channel, 4 = right music channel). When the
production is complete, the multitrack version of the program can be mixed down to two
tracks on the program master.
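The four-track plan described above can be sketched as a simple data structure. This is only an illustrative sketch; the cue names and timings are hypothetical, and real editing software keeps this information in its timeline.

```python
# A minimal track-chart sketch for the four-track plan described in the text:
# track 1 = VO/SOT, track 2 = natural sound/SFX, tracks 3/4 = stereo music.
# All cue names and timings below are hypothetical examples.

TRACKS = {1: "VO/SOT", 2: "Nat sound/SFX", 3: "Music L", 4: "Music R"}

cues = [
    # (track, start_sec, end_sec, source)
    (1, 0, 12, "VO: opening narration"),
    (2, 0, 20, "Nat: factory floor ambience"),
    (3, 0, 30, "Music bed (left)"),
    (4, 0, 30, "Music bed (right)"),
    (1, 12, 25, "SOT: manager interview"),
]

def print_chart(cues):
    """Print cues grouped by track, each track's cues in timeline order."""
    for track in sorted(TRACKS):
        print(f"Track {track} ({TRACKS[track]}):")
        for _, start, end, source in sorted(c for c in cues if c[0] == track):
            print(f"  {start:>4}s-{end:>4}s  {source}")

print_chart(cues)
```

Keeping similar sources on the same track, as the text advises, makes the final mixdown to two channels straightforward.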
Location & studio recording for video
1. Bedroom Studio – which is typically a small setup next to your bedside, and is the
absolute minimum you need to record sound into your computer.
2. Dedicated Home Studio – which is typically a room in your house used solely for
recording that includes both studio furniture, and acoustic treatment.
3. Semi-Pro Studio – This can be either at your home or a different location and typically
includes the equipment necessary to record multiple musicians simultaneously.
4. Pro Studio – This is typically located at a commercial facility, and includes whatever tools
necessary to produce professional results in the most efficient way possible.
MICROPHONE:
Camera Mounted microphone
The camera microphone is a standard feature of many portable video cameras and
camcorders. In inexpensive consumer camcorders, the microphone is commonly built into
the camcorder. In industrial and professional-quality equipment, the microphone generally is
attached to the camera but can be removed. In all these systems, the camera microphone can
be used to record the audio simultaneously with the recording of the picture. Indeed, this
is one of the great advantages of videotape over film: not only are picture and sound
recorded simultaneously, but they can also be played back as soon as the recording has
been completed.
External microphones
External microphones are any microphones that are not built into or mounted onto the field
camera. Once the field producer decides which type of microphone is best suited to the field
recording situation at hand, a decision needs to be made about where to position the
microphone and how to place it in that position. Microphones can be handheld, pinned onto
the performer’s clothes, hidden on the set, hung from the ceiling, supported on booms off
camera, or attached directly to the object making the sound.
Wired microphones
These are widely used in field recording because of their ease of operation and reliability.
After connecting a cable to the microphone and the appropriate audio input on the
camcorder, you can begin recording.
Although this type of recording arrangement works well in most situations, the presence of
the microphone cable sometimes causes problems. If the subject moves around a lot or is at
a great distance from the camcorder, or if the presence of microphone cables will spoil the
appearance of the event, you should use another method of transmitting the signal from the
microphone to the camcorder.
Wireless Microphones
Wireless microphones, also called radio microphones, eliminate many of the problems
associated with the use of microphone cables; therefore, they are extremely popular in field production. A
wireless microphone sends its signal to a receiver via RF (radio frequency) transmission
rather than through a cable. That is, it actually broadcasts the audio signal to the receiving
station and thereby eliminates the need to use long cables to connect these two points.
Wireless microphones contain three components: the microphone itself, a small transmitter
attached to the microphone that transmits the signal, and a small receiver that receives the
transmitted signal. The output of the receiver is then connected by cable to the appropriate
audio input on the camcorder.
In extreme cases of interference, the audio on pre-recorded field tapes can be manipulated to
improve sound quality. Two common types of sound manipulation involve the use of filters
or equalizers. Audio filters allow you to cut out certain parts of the high or low end of the
audio signal. The filter works by blocking out the parts of the signal above or below a
specified cut-off frequency. Notch filters can be used to eliminate a particular range of
frequencies in the signal.
Equalizers are similar to filters but are somewhat more complex. Equalizers break down the
audio signal into a series of equally wide ranges of frequencies. The level (gain) of the
signal can then be increased, decreased, or left unchanged in each of the intervals. By
increasing the level of some of the intervals and decreasing the level of others, the overall
quality of the sound can be changed.
Filters and equalizers can also be used to achieve a particular kind of production effect. For
example, the tinny sound of an inexpensive portable radio can be simulated by using a
low-cut (high-pass) filter to remove the rich low-frequency part of the signal, allowing
only the thin high frequencies to pass. A normal audio signal can thus be manipulated to make it sound
like a radio, telephone, loudspeaker, and so on.
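The low-cut filtering described above can be sketched in a few lines of numpy. This is only an illustration of the idea: real consoles use analog or IIR/FIR filters rather than the brick-wall FFT filter used here, and the 300 Hz cut-off is a hypothetical choice for a "tinny radio" effect.

```python
import numpy as np

# A sketch of frequency-domain filtering: everything below a cut-off
# frequency is removed (a low-cut / high-pass filter), leaving only the
# thin high frequencies, as in the "portable radio" effect in the text.

RATE = 8000  # sample rate in Hz (assumed for this example)

def highpass(signal, cutoff_hz, rate=RATE):
    """Zero out all frequency content below cutoff_hz (a low-cut filter)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / rate)
    spectrum[freqs < cutoff_hz] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# Test tone: a 100 Hz (low) and a 1000 Hz (high) sine mixed together.
t = np.arange(RATE) / RATE  # one second of samples
low = np.sin(2 * np.pi * 100 * t)
high = np.sin(2 * np.pi * 1000 * t)
filtered = highpass(low + high, cutoff_hz=300)

# The 100 Hz component is removed; the 1000 Hz component passes through.
print(round(np.abs(np.fft.rfft(filtered))[100] / len(t), 3))   # 0.0
print(round(np.abs(np.fft.rfft(filtered))[1000] / len(t), 3))  # 0.5
```

An equalizer, as the text notes, does the same thing per frequency band, raising or lowering the gain in each band instead of removing it outright.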
It is extremely important to monitor the sound level and quality when performing an audio
dub. If background music is to be dubbed in, the music should not be so loud that it
interferes with the quality of the voice on the other channel. The proper level can be set by
watching the VU meters and by monitoring the sound with headphones. The quality of the
mix can be monitored by listening to the mixed output of both audio channels. Simply set
the audio monitor selector switch to “mix” (rather than to either channel 1 or 2), plug in the
headphones, and listen. Adjust the level of the music so that it takes its proper place in the
background, behind the voice.
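Setting the music "in its proper place behind the voice" is, numerically, just applying a gain reduction to the music channel before summing. The sketch below assumes mono float samples and a hypothetical -12 dB music level; in practice you set the level by ear and by watching the VU meters, as described above.

```python
import numpy as np

# A sketch of an audio dub mix: voice at full level, music attenuated so
# it sits in the background. The -12 dB figure is a hypothetical starting
# point, not a universal rule.

def db_to_gain(db):
    """Convert a decibel value to a linear gain factor."""
    return 10 ** (db / 20.0)

def mix(voice, music, music_db=-12.0):
    """Sum the voice at unity gain with the music reduced by music_db dB."""
    return voice + music * db_to_gain(music_db)

rng = np.random.default_rng(0)
voice = rng.normal(0, 0.1, 48000)   # stand-in for a voice recording
music = rng.normal(0, 0.1, 48000)   # stand-in for a music bed

out = mix(voice, music)
print(round(db_to_gain(-12.0), 2))  # 0.25: music at about a quarter amplitude
```

Monitoring the mixed output, as the text recommends, is what tells you whether that gain figure is actually right for the material.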
UNIT-V
Introduction to editing
Editing is the process of arranging individual shots or sequences into an appropriate order.
The appropriate order is determined by the information the editor wants to communicate and
the impact the editor wants to achieve with the material. The process of editing includes
making a series of aesthetic judgments, through which the editor decides how the piece
should look, and performing a series of technical operations to carry out the editing
decisions.
Editing is an invisible art. When it is done well, it is hardly noticed—yet almost every visual
message in television and film has been edited.
If we define editing as the process of selecting and ordering shots, we can identify two kinds
of video editing: editing done during a program’s production and editing done after the
program has been videotaped. This latter type of editing is called postproduction editing.
The autonomous creative editor is an individual with significant responsibility for making
and executing editing decisions. The creative editor must understand both the aesthetic
principles of editing as well as how to operate the video editing equipment. Working in a
variety of production situations, the creative editor may be given a brief story outline and a
dozen cassettes of field tapes or a set of video files on a computer hard drive from which to
edit a story segment that conforms to the general conventions of the program. Here, the
editor has an incredible amount of creative freedom. Decisions about the use of music,
sound effects, sound bites, shot sequences, and even the structure of the segment may be left
to the editor’s discretion. However, the segment or program producer or director usually
retains veto power.
The subordinate technical editor is usually (but not always) primarily a technician or
engineer who is thoroughly familiar with the operation of the hardware and software in the
editing system. The technical editor executes editing decisions made by someone else.
Creative control of editing decisions is retained by the producer or director of the program,
and the technical editor performs the edits as they have been determined by the individuals
with creative control over the program.
The principal difference between creative and technical editors lies in the location of
creative control over editing decisions. The autonomous creative editor has such control,
whereas the subordinate technical editor usually does not. However, most good subordinate
technical editors make suggestions about the aesthetics of the edit and, for that reason, they
too are partners in the creative editing process.
Modern digital nonlinear editing systems not only allow audio and video to be edited
quickly and efficiently but also allow for easier integration of graphics and special effects
than the tape-based systems they are replacing.
Editing, graphics, and effects software are installed on the same computer system, and one
person has the responsibility of performing editing, graphics, and effects tasks. The technical
editors of old are being replaced quickly by more creative editing personnel, who are as
familiar with graphic and sound design as they are with video picture editing.
TRANSITIONS
For those video producers with simple linear videotape editing systems, the cut is the only
visual transition possible from shot to shot. The most frequently used transition in video and
film, the cut is an instantaneous change from one shot to another. It approximates the effect
achieved by blinking, without leaving a blank or black space between shots.
For the producer with access to a nonlinear editing system, a number of other transitions are
available.
A fade is a gradual transition from black to an image or from an image to black. Fades are
usually used at the beginning and end of a program; thus, we have the terms fade in and fade
out. However, fades are also used within a program. A fade signals a break in continuity of
the visual message. Fades are used to insulate the program material from a commercial, to
signal to the audience that an event or episode has ended, that time has passed, and so on.
The dissolve is similar to the fade except it involves two visual sources. One gradually fades
out as the other fades in, and the two sources overlap during the transition. The effect is one
image changing into another. Dissolves, once widely used to signal passage of time, now
are more often used to show the relationship between images, particularly structurally
related images. A dissolve from a photograph of a young man to another photograph of the
same man in old age not only clearly shows the passage of time but also represents the
metamorphosis of one image into another.
A wipe is a transition in which one screen image is replaced by another. The second image
cuts a hard- or soft-edged pattern into the frame as the transition takes place. In nonlinear
editing systems, wipe patterns are selected from the transitions menu. All nonlinear editing
system transition menus contain a standard array of wipe patterns: circles, squares,
diagonals, diamonds, and so on.
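The three transitions above reduce to simple per-pixel arithmetic on successive frames. The sketch below treats frames as small grayscale numpy arrays purely for illustration; real editing systems do the same blending on full-colour video at frame rate, and the frame sizes here are hypothetical.

```python
import numpy as np

# Sketches of the transitions described in the text, on grayscale frames.

def fade(frame, level):
    """Fade: blend a frame with black. level=0 -> black, level=1 -> full image."""
    return frame * level

def dissolve(a, b, progress):
    """Dissolve: frame A fades out while frame B fades in, overlapping."""
    return a * (1.0 - progress) + b * progress

def wipe(a, b, progress):
    """Hard-edged left-to-right wipe: B replaces A across the frame width."""
    out = a.copy()
    edge = int(a.shape[1] * progress)  # how far the wipe edge has travelled
    out[:, :edge] = b[:, :edge]
    return out

a = np.full((4, 8), 100.0)  # outgoing shot
b = np.full((4, 8), 200.0)  # incoming shot

print(dissolve(a, b, 0.5)[0, 0])  # 150.0: halfway through, the images overlap
print(wipe(a, b, 0.5)[0, 0])      # 200.0: left half already the new shot
print(wipe(a, b, 0.5)[0, 7])      # 100.0: right half still the old shot
```

A cut, by contrast, needs no arithmetic at all: one frame simply follows the other.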