
Notes of Complete Course – 205 BA JMC

BASICS OF VIDEO CAMERA, LIGHTS AND SOUND

Complete Course of Basics of Video Camera, Composition and Types of Shots, Lighting and Sound
Course Code – 205 (BA JMC)
Unit – I: Introduction to Video Camera:
Introduction
A video camera is an optical instrument that captures videos (as opposed
to a movie camera, which records images on film). Video cameras were
initially developed for the television industry but have since become widely
used for a variety of other purposes.
Video cameras are used primarily in two modes. The first, characteristic of
much early broadcasting, is live television, where the camera feeds real
time images directly to a screen for immediate observation. A few cameras
still serve live television production, but most live connections are
for security, military/tactical, and industrial operations where surreptitious or
remote viewing is required. In the second mode the images are recorded to
a storage device for archiving or further processing; for many years,
videotape was the primary format used for this purpose, but was gradually
supplanted by optical disc, hard disk, and then flash memory. Recorded
video is used in television production and, more often, in surveillance and
monitoring tasks in which unattended recording of a situation is required for
later analysis.
In 1891, Thomas Edison's employee William Kennedy Laurie Dickson
invented the first movie camera, called the Kinetograph.
A video camera is a camera used to make electronic motion pictures. It
captures moving images and synchronous sound. Early video cameras
were all analog and most modern ones are digital. Analog video cameras
produce signals that can be displayed with analog televisions. The signals
can be shown at the time, or can be stored in an analog format on
magnetic tape. Digital video cameras produce digital images.
Video cameras were invented early in the 20th century for television use,
and by the end of the century, people could buy digital video cameras,
which can almost immediately display the image. Video recorders that
could record the image on magnetic tape were created in the mid-20th
century.
At first, video cameras were large and expensive.
Only professionals operated them. As the electronics industry advanced,
and solid state circuits with transistors and microprocessors replaced
vacuum tubes, video cameras became smaller and less expensive. Now
many mobile phones and other consumer electronic devices include video
cameras. In addition, software is now widely available to edit or to
compress the output from video cameras.

A video conveys huge amounts of information in a short time. You can
say more in a shorter amount of time on video as compared to text. Video
is more engaging to the senses, so it can convey more information by
showing and telling at the same time.

Parts of Video Camera and Functions

The Viewfinder
As technology progresses, tech gets smaller and cheaper – and
camcorders are no exception. When battling on price point – trying to make
the cheapest possible product – manufacturers start looking at where they
can cut costs. With camcorders, this led to viewfinders no longer being
included, with LCD screens relied on instead.
Don’t make this mistake. LCDs are great, but they have serious limitations.
If you’re filming outdoors in sunny conditions, forget about trying to use
your LCD, and regardless of conditions, it’s difficult to keep your camera
steady when using the screen instead of a viewfinder. Holding a camera
steady against your eye is just much easier, to say nothing of the way that
LCD screens gobble up battery power.
That’s not to say that LCD screens are bad; they offer some serious
advantages. Reviewing footage, navigating menus, and allowing you to
shoot at different angles; they have a lot to offer.
So get the best of both worlds, and pick up a model with both.

Image Stabilization
Even when shooting through a viewfinder, you’re not going to be a stable
filming platform; shaky hands are an inevitability. Image stabilizers are
meant to counteract this issue. There are two primary types of stabilizers:
optical and digital.
An optical image stabilizer is built right into the camera’s lens, its sensors
counteracting the myriad small movements that hand held filming
produces. Digital stabilizers, on the other hand, work by centering your
image during recording, which tends to reduce your resolution.
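As a rough illustration of that resolution cost, here is a minimal sketch in Python (the 10% crop margin per edge is an assumed figure for demonstration, not a spec of any particular camera):

# Illustrative sketch: resolution lost to a digital stabilizer's crop.
# A digital stabilizer reserves a margin around the frame so the image
# can be re-centered; the 10% margin here is a hypothetical value.

def stabilized_resolution(width, height, margin=0.10):
    """Usable frame size after reserving a crop margin on every edge."""
    return int(width * (1 - 2 * margin)), int(height * (1 - 2 * margin))

print(stabilized_resolution(1920, 1080))  # (1536, 864) - below Full HD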
This is a tricky situation, as image stabilizers can vary greatly in efficacy,
even in top end models. This is where it pays to do your homework; read
user reviews, and figure out which image stabilizers are getting people real
results.

An External Microphone Port


Your camcorder has an internal microphone, it’s true, but the quality of the
audio is going to be dreadful. That’s not a knock – inside a camera is a
terrible place for sound capture to occur, and tiny microphones simply can’t
keep pace with their larger siblings.
So make sure you’re able to use an external mic.
Nothing says “amateur hour” like static laden, fuzzy audio. Using an
external microphone can drastically improve the quality of your videos. So
make sure your unit has a mic-in port, a stereo jack, or some other way to
access an external microphone.

Optical Zoom
“Mr. DeMille, I’m ready for my close-up.” Not if you’re relying on digital
zoom, you’re not.
Camcorders and cameras, in general, have two kinds of zoom: digital and
optical. Digital zoom is like enlarging a web page; it makes the image
bigger, sure, but also blocky, fuzzy, and distorted. Optical zoom, on the
other hand, is the same type used in SLR cameras, where the lens moves
to highlight and focus on an image.
There’s no debate; if you’re going to zoom, you want to use an optical
zoom if you have any desire to see what you’re filming. Which is kind of the
point of shooting in the first place, right?
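The difference is easy to quantify: digital zoom is a crop followed by an upscale, so the detail actually captured shrinks as the zoom factor grows. A minimal sketch in Python (the frame size and zoom factors are example values):

# Illustrative sketch: digital zoom crops the sensor image and then
# stretches the crop back to full size, so real detail is lost.

def digital_zoom_source(width, height, zoom):
    """Size of the region actually captured before upscaling."""
    return int(width / zoom), int(height / zoom)

for zoom in (2, 4):
    w, h = digital_zoom_source(1920, 1080, zoom)
    print(f"{zoom}x digital zoom: {w}x{h} stretched back to 1920x1080")
# 2x -> 960x540, 4x -> 480x270: bigger on screen, but blockier.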

Manual Controls
Your camcorder has default settings for how you capture video, and those
are great. But you’re going to want to change them now and then, and for
that, you’ll need manual controls. Focus, shutter speed, exposure, and
especially white balance – these are integral to getting a clear, crisp image.
White balance is necessary for capturing color accurately, and adjusting
focus and shutter speed lets you adjust how much light hits the lens –
something that is incredibly useful when the camera can’t figure it out on its
own.
Without manual controls, you’re stuck with whatever your camera can
come up with on its own; with them, you can always ensure that you’re able
to film.

Parts of a Camera and Their Functions


These are the basic parts that make up a digital camera, along with their functions:

 Aperture
 Lens
 Shutter Release Button
 Memory Card
 Viewfinder
 User Controls

Aperture of a Digital Camera


The aperture is the opening through which light passes to enter the
camera. It is a camera control that lets you choose how far to open the
lens. Once the lens is set to the desired opening, the exposure meter
adjusts automatically, and the meter then determines the closing speed of
the shutter. This procedure happens every time we click to take a picture.
Technically, the f-number is defined as the ratio of focal length to effective
aperture diameter. A lens carries f-stop (f-number) markings, and a greater
aperture on a lens means a lower f-number, and vice versa.
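To make that ratio concrete, here is a minimal sketch in Python (the 50 mm focal length and the opening diameters are example values, not specs of any particular lens):

# Illustrative sketch of the definition above:
# f-number = focal length / effective aperture diameter.

def f_number(focal_length_mm, aperture_diameter_mm):
    return focal_length_mm / aperture_diameter_mm

print(f_number(50, 25))    # 2.0 -> written f/2: a large opening, low f-number
print(f_number(50, 6.25))  # 8.0 -> written f/8: a small opening, high f-number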
Lens of Digital Camera
The lens, also called a photographic or optical lens, is attached to the
camera body. It helps in capturing an object and storing it. Lenses are
available in different types, including standard zoom, telephoto zoom,
fisheye, macro, and wide-angle.
The type of lens that you choose can have a determining effect on the
picture that you want to take. Some lenses can take detailed photos from
far away, while others can capture extreme detail from close by. There are
even lenses that can give you distorted images. The conclusion is that the
better the lens, the better the quality of the picture.
Shutter Release Button in Camera
The shutter release button is usually a prominent button on top of the
camera, and not all of them look alike. Some shutter release buttons bulge
out of the body of the camera. Others are made flush with the body of the
camera, which gives the camera a sleek, smooth-looking top.

To take a picture, you must press the shutter release button. When the
button is pressed, the exposure meter determines the appropriate setting
of the lens. This means that the release button opens and closes the
shutter for just long enough to allow the right amount of light to enter.
Memory Card in Camera
Memory cards come in different sizes, determined by the memory capacity
that the card can hold. Some of them are tiny (the microSD card), and yet
they can hold huge amounts of data, up to terabytes, while other memory
cards offer gigabytes of memory. Memory cards should be kept in a
memory card case to protect the data, as dust and weather conditions
could damage a memory card.

Viewfinder in Camera
The viewfinder is the component that displays the image to be shot. When
the camera is held to the photographer's eye, he or she uses the
viewfinder to find a spot to focus on. Some cameras have an LCD screen,
and more often than not the LCD acts as the viewfinder. Getting a good,
sharp photograph depends on factors like the camera's megapixels, the
light falling on the object, and the quality and type of lens.

Parts of a Camera
 Shutter.
 Image Sensor- The Most Important Part of a Camera.
 Viewfinder.
 Digital LCD Display.
 Button Interface.
 Inbuilt Flash.
 Shutter Trigger.
 Mode Dial.

Parts of a Camera
1. Aperture

Aperture is the opening at the front of the camera, located in the lens.
For an interchangeable-lens camera, you have the option to change the
lens, so you have more options with the aperture.
For a standard point-and-shoot or bridge camera, the lens is a fixed one,
so the options are limited. You can vary the aperture of the lens from the
camera body.
2. Shutter

The shutter is another vital part of a camera. It controls the time duration for
which the image sensor is exposed to the light.
Most digital cameras come with either a combination of electronic and
mechanical shutters or a mechanical shutter alone.
All digital cameras are designed for a specific shutter life, also known as
the camera shutter count. The reliable shutter operation is guaranteed
only up to this value. The top-end models will come with a higher shutter
count.
3. Image Sensor- The Most Important Part of a Camera

It is the image sensor that decides the image resolution. So, it is like the
heart of the camera.
In the early days, film used to do this job. Now it has been replaced by
CCD and CMOS sensors.
They are responsible for acquiring each of the pixels in an image. An image
sensor is quantified based on its size and number of megapixels.
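As a quick worked example of that quantification, the sketch below multiplies out the pixel grid (the 6000 x 4000 dimensions are an assumed example):

# Illustrative sketch: megapixels = horizontal pixels x vertical pixels,
# divided by one million.

def megapixels(width_px, height_px):
    return width_px * height_px / 1_000_000

print(megapixels(6000, 4000))  # 24.0 -> marketed as a "24 MP" sensor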
4. Viewfinder

The viewfinder is the small rectangular opening, seen on top of the camera.
You can see through this window to compose and frame the shot.
Digital cameras either have an optical viewfinder or an electronic
viewfinder. The viewfinder also shows parameters like exposure, shutter
speed, Aperture, ISO, and a few other basic settings for image capture.
5. Digital LCD Display
All digital cameras will have an LCD to view images and to set the different
parameters and modes.
It is the visual interface that helps the photographer to set the camera
settings according to his choice. It is on the backside of the camera.
Some high-end models come with dual displays. The secondary display will
be on the top side.
6. Button Interface
You can find many buttons that are configured to do certain operations on
the backside and top of the body. Some cameras allow you to configure
some buttons according to your choice.
7. Inbuilt Flash

More than 90% of digital cameras will come with an inbuilt flash. It will be
on the top side. It will pop up only when you enable the flash in the
settings or raise it manually.
You will not get the same performance as an external flash. It will also
consume a good amount of battery power, especially for point and shoot
ones.
8. Shutter Trigger
Shutter trigger is a kind of tactile push-button switch which comes with dual
press option. The first press, which is referred to as the half click, is to
acquire the focus on the subject.
The second press, which is the full press is to activate the shutter
mechanism.
Some cameras allow you to separate the half-press Autofocus feature from
this button. You can configure a button on the backside for focusing; in
that case, the shutter button activates only the shutter.
The shutter trigger button is located on the top right-hand side of the
camera for usability.
9. Mode Dial
The Mode dial is another part of a camera used to change different modes.
Some of the standard Modes include Aperture mode, Shutter mode,
Manual mode, and Auto mode. It is located on the top side.
10. Hotshoe
Hotshoe is another integral part of a digital camera. It is on the top side of
all cameras.
It is mainly for mounting the external flash. You can also use it to mount
wireless triggers, an external microphone, or a spirit level.
This Hotshoe mount varies for different camera manufacturers. So, you
cannot use one model of external flash on all bodies.

11. Communication Ports

Communication ports are usually on either side of the camera. USB is the
most common type of communication port, present in all models. It is for
image transfer from the camera to the computer.
Other communication interfaces include HDMI port, Audio port, Ethernet,
Wired remote trigger port, and Display port. These ports may not be
present in all models.
Bluetooth, Wifi, and NFC are some of the wireless communication
interfaces supported by a camera. You need to refer to the camera manual
to check the different types of communication interfaces.
12. Recording Medium
In digital cameras, the memory card is the photo storage medium. The type
of memory card varies with different types of cameras. There will be a card
slot located on the side or bottom to insert the memory card. Some
cameras come with dual memory card slots.
The SD card is the most commonly supported memory card for digital
cameras. CompactFlash, microSD, XQD, and CFast cards are some of the
other memory cards used in DSLR and mirrorless digital cameras.

13. Battery and Battery Compartment

All digital cameras need a battery for their operation. The type of battery
varies for different camera types. Most cameras use lightweight
rechargeable lithium-polymer batteries. It will be a custom one, supplied
along with the digital camera.
Some point and shoot models use alkaline batteries. The battery
compartment is usually at the bottom or side of the camera.
14. Tripod Mount
All Digital cameras will come with a tripod mount, located at the bottom
side. It allows you to mount the camera on a tripod.
Most of the cameras will have a 1/4-20 UNC thread. Some come with a
3/8-16 UNC thread. So, check the manual to know the right tripod thread size.

ISO
ISO is your camera's sensitivity to light as it pertains to either film
or a digital sensor. A lower ISO value means less sensitivity to
light, while a higher ISO means more sensitivity.

Description: ISO, named after the International Organization for
Standardization, is the sensitivity to light as it pertains to either film or
a digital sensor. ISO is one of the three legs of the exposure
triangle used to make sense of what goes into determining an
exposure. The other two legs are aperture and shutter speed.
ISO Speed refers to your camera sensor's sensitivity to light. The
higher the ISO speed, the more light-sensitive it is. What this
means is that you can use a quicker shutter speed, which is
useful in sports photography and low light, or a smaller aperture
where you want a greater depth of field.
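The trade-off described above can be expressed in stops: doubling the ISO adds one stop of sensitivity, which can be spent on a proportionally faster shutter speed at the same aperture. A minimal sketch in Python (the starting exposure values are assumed examples):

# Illustrative sketch of the exposure-triangle trade-off: raising ISO
# lets you shorten the shutter time while keeping the same exposure
# (aperture held constant).

def equivalent_shutter(shutter_s, iso_old, iso_new):
    return shutter_s * iso_old / iso_new

# A scene metered at ISO 100 and 1/60 s; raising ISO to 400 (two stops)
# allows a shutter speed four times faster:
print(equivalent_shutter(1 / 60, 100, 400))  # ~0.00417 s, i.e. 1/240 s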
Types of Video Camera, Equipment and Accessories
What are the types of cameras?

Different Types of Cameras


 Compact Cameras
 Action (Adventure) Cameras
 Medium Format Cameras
 Traditional Film Cameras
 DSLR Video Cameras.
 Mirrorless Video Cameras.
 Point-And-Shoot Video Cameras.
 Professional-Grade Film Cameras.
 Sports and Action Video Cameras.
 360-Degree Video Cameras.
 Digital Camcorders.

Broadcast Standard:

In the United States, Standards and Practices (also referred to as
Broadcast Standards and Practices, or BS&P for short) is the name
traditionally given to the department at a television network which is
responsible for the moral, ethical, and legal implications of the
programs that the network airs.
Broadcast television systems are the encoding or formatting systems for
the transmission and reception of terrestrial television signals.

The major analog TV standards are NTSC, PAL and SECAM. The video
signals consist of one luma signal and two chroma signals. Luma contains
information about black and white video. Chroma contains additional
information for black and white video to be converted to color video.

In India, the PAL video format is used. NTSC is the video standard
commonly used in North America and most of South America. PAL is the
video standard that is popular in most European and Asian countries.
Television Standards/Broadcast Standards
There are a number of TV Standards worldwide. Not all television sets in
the world are alike. Countries use one of the three main video standards –
PAL, NTSC or SECAM. What this means is that a video from a PAL
country will not play in a country that uses the NTSC standard.
Frames
Before we dive deep into the various TV Standards we shall take a look at
a few basics of TV transmission. A television transmission consists of a set
of rapidly changing pictures to provide an illusion of continuous moving
picture to the viewer. The pictures need to come at a rate of 20 pictures per
second to create this illusion. Each of these "rapidly changing" pictures is a
frame. A typical TV transmission is at 25-30 frames per second (fps).

Lines
Each frame consists of several closely spaced lines. The lines are scanned
from left to right and from top to bottom. A typical TV picture consists of
525 to 625 lines. Considering this large number of lines, if all were written
one after another, the picture would begin to fade at the top by the time the
last line was written. To avoid this, one pass (called a field) carries the
odd-numbered lines and the next field carries the even-numbered lines.
This provides uniformity in the picture and is called interlacing.
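The arithmetic behind interlacing is simple; here is a quick sketch using the nominal PAL-style figures from this section:

# Illustrative sketch: an interlaced system shows two fields per frame,
# doubling the perceived refresh rate without doubling the line rate.

frames_per_second = 25   # nominal rate in 50 Hz mains countries
lines_per_frame = 625

print(frames_per_second * 2)                # 50 fields per second
print(frames_per_second * lines_per_frame)  # 15625 lines per second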

Timing
TV receivers require a source to time the rapid succession of frames on the
screen. Designers decided to use the Mains power supply frequency as
this source for two good reasons. The first was that with the older type of
power supply, you would get rolling hum bars on the TV picture if the mains
supply and the picture source were not at exactly the same frequency. The
second was that TV studio lights, and for that matter all fluorescent
(non-incandescent) lights, flicker at the mains frequency. Since this flicker is much
higher than 16 times per second the eye does not detect it. However this
flicker could evolve into an extremely pronounced low frequency flicker on
TV screens due to a "beat" frequency generated between the light flicker
and the mains frequency. This would have made programmes unviewable,
particularly in the early days of development of TV receivers.
The two mains power frequencies worldwide are 50Hz and 60Hz. This
meant that there was an immediate division in the TV standards - the one
with 25 frames per second (50 Hz) and 30 frames per second (60 Hz).
Most of the compatibility problems between TV standards across the world
stem from this basic difference in frequencies.
NTSC (National Television System Committee)
The majority of 60Hz based countries use a technique known as NTSC,
originally developed in the United States by a committee called the
National Television System Committee. NTSC (often jokingly referred to
as Never Twice the Same Colour) works perfectly in a video or closed
circuit environment but can exhibit problems of varying colour when used in
a broadcast environment.
PAL (Phase Alternating Line)
This hue change problem is caused by shifts in the colour sub-carrier
phase of the signal. A modified version of NTSC soon appeared which
differed mainly in that the sub-carrier phase was reversed on each second
line; this is known as PAL, standing for Phase Alternating Line (it has a
wide range of jocular expansions, including Pictures At Last, Pay for Added
Luxury, etc.). PAL has been adopted by a few 60Hz countries, most notably Brazil.
SECAM
Amongst the countries based on 50Hz systems, PAL has been the most
widely adopted. PAL is not the only colour system in widespread use with
50Hz; the French designed a system of their own (primarily for political
reasons, to protect their domestic manufacturing companies) which is
known as SECAM, standing for Séquentiel Couleur à Mémoire. The
most common facetious expansion is System Essentially Contrary to
American Method.
SECAM ON PAL
Some satellite TV transmissions (usually Russian) that are available over
India are in SECAM. Since the frame rate (25 frames/sec) and scan rates
are identical, a SECAM signal will replay in B&W on a PAL TV and vice versa.
However, transmission frequencies and encoding differences make
equipment incompatible from a broadcast viewpoint. For the same reason,
system converters between PAL and SECAM, while often difficult to find,
are reasonably cheap. In Europe, a few Direct Satellite Broadcasting
services use a system called D-MAC. Its use is not widespread at present,
and it is transcoded to PAL or SECAM to permit video recording of its
signals. It includes features for 16:9 (widescreen) aspect ratio
transmissions and an eventual migration path to Europe's proposed HDTV
standard. There are other MAC-based standards in use around the world
including B-MAC in Australia and B-MAC60 on some private networks in
the USA. There is also a second European variant called D2-MAC which
supports additional audio channels making transmitted signals
incompatible, but not baseband signals.
Quick Facts:
 NTSC and PAL are video standards that are recorded on the cassette.
These videos send an electronic signal to the television; only then can it
be viewed.
 In India, the PAL video format is supported.
 NTSC is the video standard commonly used in North America and most
of South America.
 PAL is the video standard which is popular in most of the European and
Asian countries.
 One difference between NTSC and PAL is the number of frames
transmitted per second. In NTSC, 30 frames are transmitted per second,
and each frame is made up of 525 scan lines.
 In PAL, 25 frames are transmitted per second. Each frame consists of
625 scan lines.
 Second, the power frequency used in NTSC is 60 Hz, while in PAL the
power frequency is 50 Hz.

Lenses and Filters:


TYPES OF CAMERA LENSES

Camera lenses for photo and video


There are various different types of camera lenses. That’s why a camera
lens guide like this can help you buy or rent the best lenses.
We'll be covering the basic types of lenses but many lenses can
simultaneously be two different types. For example:
 A prime lens can also be a standard lens.
 Zoom lenses can also be parfocal lenses.
 Long-focus lenses can also be telephoto lenses.
Different lenses work best for different situations, and this isn’t limited to
photo lenses or video lenses. The image properties for both are based on
the quality of the lens and the Focal Length.

Also, the types of lenses available for DSLR cameras or mirrorless
cameras are based around the lens mount, and not the focal length or lens
capabilities.
Camera Control and Adjustment:
1. Aperture Control
Aperture means 'hole', 'gap' or 'opening' and it lets light through your lens
to your sensor. A large aperture will let in lots of light and produce a
shallow depth-of-field; a small aperture restricts light and brings more of
your scene into focus.

What is aperture, and what does it do?


Aperture is a hole in the lens that controls how much light gets into your
camera. It’s one important element of the exposure triangle, along with ISO
and shutter speed. Aperture also affects your depth of field, which is
defined by the level of clarity or blurriness of certain elements within a
photo.
Aperture adjustments affect the depth of field for your photos — the
range between the nearest and farthest objects in focus within a
picture. Shallow depth of field, which blurs the background to help pop the
in-focus subject of the photo, is achieved with a wide aperture.
The f-stop number, or f-number, is the setting that controls the size of the
aperture and therefore how much light can pass through the camera lens.
F-numbers are determined by the ratio of the focal length of a lens to the
diameter of the aperture.

2. Depth of Field
The depth of field (DOF) is the distance between the nearest and the
furthest objects that are in acceptably sharp focus in an image
captured with a camera.

Depth of field (DoF) is the distance between the nearest and furthest
elements in a scene that appear to be "acceptably sharp" in an image.
The distance between the camera and the first element that is considered
to be acceptably sharp is called DoF near limit.

In simplest terms, depth of field is how much of your image is in focus. In
more technical terms, depth of field is the distance in an image where
objects appear “acceptably in focus” or have a level of “acceptable
sharpness.”
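One standard way to put a number on depth of field is the hyperfocal distance: focus there, and everything from half that distance to infinity is acceptably sharp. A minimal sketch in Python (the 0.03 mm circle of confusion is a commonly assumed value for full-frame cameras; the lens values are examples):

# Illustrative sketch: hyperfocal distance, a textbook depth-of-field
# calculation. H = f^2 / (N * c) + f, where f is focal length, N the
# f-number, and c the circle of confusion.

def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.03):
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

h = hyperfocal_mm(50, 8)       # 50 mm lens at f/8
print(round(h / 1000, 1), "m")  # ~10.5 m; sharp from ~5.2 m to infinity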

Why is depth of field important?

One of the key factors of a beautifully composed image is the
nature of focus on the primary subject within the frame. Let us say we have
to capture the close-up of a character but the background is not impressive
at all. Or the filmmaker wants to keep the background out of focus so that
he/she can infuse the character with psychological depth. In this case,
creating a shallow depth of field would heighten the aesthetic appeal of the
image.
3. Depth of focus
Depth of focus is a lens optics concept that measures the tolerance
of placement of the image plane (the film plane in a camera) in
relation to the lens. In a camera, depth of focus indicates the tolerance
of the film's displacement within the camera and is therefore sometimes
referred to as "lens-to-film tolerance".

Why is Depth of Field important?


Depth of Field can influence your image and change the meaning and
intention. For example, you can selectively isolate a subject from its
background by having a narrow Depth of Field. You can also have
everything from the foreground to infinity in focus, ensuring you have a
sharp image.

4. Focal Length
It is not a measurement of the actual length of a lens, but a calculation
of an optical distance from the point where light rays converge to
form a sharp image of an object to the digital sensor or 35mm film
at the focal plane in the camera. The focal length of a lens is
determined when the lens is focused at infinity.

What is Lens Focal Length


Focal length, usually represented in millimeters (mm), is the basic
description of a photographic lens.
Lens focal length tells us the angle of view—how much of the scene will
be captured—and the magnification—how large individual elements will
be. The longer the focal length, the narrower the angle of view and the
higher the magnification. The shorter the focal length, the wider the angle
of view and the lower the magnification.
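That relationship can be written down directly: the angle of view is 2·arctan(sensor dimension / (2 × focal length)). A minimal sketch in Python (36 mm is the width of a full-frame sensor; the focal lengths are example values):

# Illustrative sketch of the focal-length / angle-of-view relationship.

import math

def angle_of_view_deg(sensor_mm, focal_length_mm):
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

for f in (24, 50, 200):  # wide, standard, telephoto
    print(f"{f} mm lens: {angle_of_view_deg(36, f):.1f} degrees")
# 24 mm -> ~73.7, 50 mm -> ~39.6, 200 mm -> ~10.3: the longer the focal
# length, the narrower the angle of view, exactly as described above.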

5. Aspect Ratio
Aspect ratio is the proportional relationship of the width of a video
image compared to its height. It is usually expressed as width:height
(separated by a colon), such as 16:9 or 4:3. The aspect ratio sets how wide
a video is formatted and affects how it will fit on your viewing screen.

The 16:9 aspect ratio can fit more information horizontally, while the
4:3 aspect ratio has more space vertically. Because of these
characteristics, they're each used for different purposes. Typically, most
videos have a 16:9 ratio, and the 4:3 ratio is best for photos.
What is film aspect ratio?
Aspect ratio is a numerical formula that describes the relationship of an
image’s width to its height. Comprised of two numbers separated by a
colon, the first number denotes the image’s width and the second its height.
For example, an aspect ratio of 1.33:1 means the image's width is 1.33
times the size of its height. If you wanted to eliminate the decimals in this
ratio, it can be (and often is) written as 4:3 instead.
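The arithmetic is worth seeing once: dividing width by height turns any ratio into its decimal form, and the same arithmetic tells you how a wide film letterboxes on a narrower screen. A minimal sketch in Python (the 1920x1080 display is an example):

# Illustrative sketch: aspect-ratio arithmetic.

print(4 / 3)   # 1.333... -> the "1.33:1" form of 4:3
print(16 / 9)  # 1.777... -> the "1.78:1" form of 16:9

# Fitting a 2.39:1 image onto a 1920x1080 (16:9) display:
image_height = 1920 / 2.39
print(round(image_height))               # ~803 pixels of picture
print(round((1080 - image_height) / 2))  # ~138-pixel bar top and bottom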
Discover four modern film and TV aspect ratios.
Although there have been many different aspect ratios throughout film and
television history, the four following ratios are the most common today.
Film cinematography aspect ratios:
1.85:1.
Similar to the 16:9 size but slightly wider, whatever you shoot in 1.85:1 will
show on widescreen TVs and computer monitors with thin black bars on
the top and bottom of the screen. Most feature films use this aspect ratio,
but some high-end TV shows also shoot in 1.85:1.
2.39:1.
This is known as the anamorphic widescreen format and is the widest
aspect ratio used in modern cinema. Premium dramatic features best
showcase its wide field of view and ability to capture broad, scenic
landscapes.
Television cinematography aspect ratios:
4:3 or 1.33:1.
Until widescreen HDTVs came on the scene, 4:3 was the normal
ratio for standard-definition television sets. Today, the 4:3 aspect
ratio primarily serves stylistic purposes — for example, giving off
the vintage vibe that was popular before widescreen aspect ratios
became the norm.
16:9.
This is the most used aspect ratio for display on standard size HDTV
widescreens and computer monitors. In addition, 16:9 is also used for most
video filmed for TV and the internet since other film aspect ratios tend to
give off a more cinematic look. In fact, outside of actual in-person movie
theaters, most viewers watch content on 16:9 screens. It’s also the
standard aspect ratio for YouTube content.
Why does aspect ratio matter in film?
The right aspect ratio can make a huge difference in not just how films and
TV are displayed — but also in how they attract viewers and create viral
buzz for the project. The concept is important to both independent
filmmakers and big studio directors.
Due to limited technology, early films could only be produced in a boxy,
almost square format. Now, advances in screen and camera equipment
offer so many options that filmmakers and video creators alike can ask
themselves which would work best for their films: the old-school square
style? Something long and wide, or taller and narrower? Learning about
older aspect ratios can help modern creators finesse their current projects.

Unit – II: Composition and Types of Shots:


Types of Shots
A film shot, or camera shot, is a continuous view through a single
camera without interruption. By combining different types of film
shots, movements, and angles, filmmakers can emphasize
different actions and emotions for different scenes.

1. Extreme long shot


First up we have the extreme long shot. Also referred to as an extreme wide shot, it conveys
contextualising information to the viewer about where the action in a scene is taking place or sets
a character in their context. For this reason, an extreme long shot is often used to establish
context and setting at the start of a film or a new scene. When used in this context, an extreme
long shot can be described as an ‘establishing shot’.
2. Long shot
Next, we have the long shot, which can also be called a wide shot. This generally
shows the full length of any featured characters from the feet to the top of the head
and is used to show a character in relation to their immediate surroundings. In this
instance, we learn more about the environment the characters inhabit – here it is a
school. The shot enables us to see the characters interact through their body language,
enabling the audience to draw conclusions about the characters, such as understanding
that they are friends.

3. Mid-shot or medium shot


The mid-shot or medium shot generally shows the character from the waist to the top
of the head. It enables the viewer to see facial expressions in combination with body
language, to show emotion. For this reason, it is great for dialogue shots. In this
example, it makes perfect framing for a news-style report so we can see the reporter’s
face, whilst the framing of the body allows for us to see the reporter’s formal body
language.

4. Close-up
The close-up is often used to show a character from the top of the shoulders to the top
of the head. It’s used for capturing a character’s facial expression, heightening
emotions and building tension. It’s another great shot type for dialogue.

5. Extreme close-up
And lastly, we have an extreme close-up, when an object, item or body part fills the
frame, which is used for emphasis, showing detail and, once again, heightening
emotion. In this instance, the focus on the second hand of the clock suggests that time
will be an important factor in the sequence to follow.
Let’s now move on to camera angles.

6. High-angle
The shot below is at a high angle. Angles can use any of the framing types we’ve
discussed above, but the camera must be positioned at an angle looking down at the
subject. Generally, a high angle is used to make the subject within the frame seem
small, isolated, vulnerable or less powerful. The extremity of the angle can be altered,
often causing the desired effect to be more or less impactful. In this case, the high
angle is used to make the characters seem even more vulnerable.

7. Low angle
The low angle can also be used in combination with any camera shot type, but the
camera must be positioned down low at an angle looking up at the subject. Generally,
a low angle is used to make the subject within the frame seem large, imposing,
daunting or more powerful. The extremity of the angle can be altered, often causing
the desired effect to be more or less impactful. In this case, the low angle wide shot of
these trees makes them look dominant, reinforcing the power of nature.

As you experiment with shot types, framing and angles, you’ll be able to create some
really interesting combinations; think about adding in camera movement too,
such as pans, tilts and even tracking shots.

Shot
In filmmaking and video production, a shot is a series of frames that runs
for an uninterrupted period of time. Film shots are an essential aspect of a
movie where angles, transitions and cuts are used to further express
emotion, ideas and movement.

Camera Shot Size Overview


 Extreme wide shot/extreme long shot: This shot is used to show the
subject and the entire area of the environment they are in.
 Wide shot/long shot: It’s used to focus on the subject while still
showing the scene the subject is in.
 Medium shot: This shot shows the subject from the waist up; the
knees-up variant is often referred to as the 3/4 shot.
 Medium close-up shot: This shot frames the subject from roughly the
chest up; it sits somewhere between a medium shot and a close-up.
 Close up shot: This shot shows emotions and detailed reactions, with
the subject filling the entire frame.
 Choker shot: A typical choker shot shows the subject’s face from just
above the eyebrows to just below their mouth and is between a close-
up and extreme close-up.
 Extreme close-up shot: This shot shows the detail of an object, such
as one a character is handling, or person, such as just their eyes or
moving lips.
 Full shot: A full shot is similar to a wide shot except that it focuses
on the character in the frame, showing them from head to toe.
 Cowboy shot: This is similar to the medium shot except that the
character is shown from the hips or waist up.
 Establishing shot: This is a long shot at the beginning of a scene that
shows objects, buildings, and other elements of a setting from a
distance to establish where the next sequence of events takes place.

What Are The Basic Types Of Camera Shots?


 Close-up
 Medium shot
 Long shot
 Extreme close-up
 Extreme long-shot
Camera Shot Size Summary
The distance your subject is from the camera impacts how the audience feels
about them. Your subject will appear largest in a close-up or choker shot
and smallest in a wide or long shot.

What is Camera Shot Framing?


Camera shot framing refers to how you place or position subjects in
shots. It’s about composing an image rather than just pointing the camera
at the subject. Some considerations when you’re framing the shot are the
relationships between characters in the shot — if there are more than one
— the size of the subject, and the elements on the left and right side of the
subject that create balance.

Types of Camera Shot Framing


 Single shot, where the shot only captures one subject.
 Two shot, which has only two characters.
 Three shot, when three characters are in the frame.
 Point-of-view shot (POV), which shows the scene from the point of
view of one of the characters, making the audience feel that they are
there seeing what the character is seeing.
 Over-the-shoulder shot (OTS), which shows the subject from behind
the shoulder of another character.
 Over-the-hip (OTH) shot, in which the camera is placed on the hip of
one character and the focus is on the subject.
 Reverse angle shot, which is approximately 180 degrees opposite
the previous shot.
 Reaction shot, which shows the character’s reaction to the previous
shot.
 Weather shot, where the subject of the filming is the weather.

Shots Indicating Subject Size
 Extreme long shot: Shows the subject from a distance.
 Long shot: Shows the entire person, although they don’t necessarily
have to fill the frame.
 Full shot: Here, the subject mostly fills the frame.
 Medium long shot: Shows the subject from the knees up.
 Cowboy shot: Shows the subject from the mid-thigh and up.
 Medium shot: Shows a portion of the subject, often from the waist up.
 Medium close-up: A shot that is between a close-up and medium
shot, often with the subject framed from the shoulder or chest up.
 Close-up: The subject’s head or face fills the screen.
 Choker: A variation of a close-up where the subject’s face fills the
frame from the eyebrows to the mouth.
 Extreme close-up: Emphasizes a small detail on the subject.

What Is A Depth Of Field?


Depth of field is used to describe the size of the area where the subjects
you’re filming are relatively sharp. The point of focus is the object in the
frame that the filmmaker most wants to call attention to. The imaginary
two-dimensional plane that extends from that point is referred to as the
plane of focus. When you’re filming, any part of the image that falls on the
plane of focus is officially in focus.

Types Of Camera Shot Focus


 Focus pull, where you focus the lens to keep the subject within an
acceptable focus range.
 Rack focus, where the focus is more aggressively shifted from subject
A to subject B.
 Tilt-shift, where parts of the image are in focus while other parts are
out of focus.
 Deep focus, when both the subject and the environment are in focus.
 Shallow focus, where the subject is crisp and in focus while the
background is out of focus.
What is A Camera Shot Angle?
A camera shot angle refers to where the camera is placed to take a shot. It
can be used to express emotion or create a different experience for the
audience. A scene can be shot from different angles to create a more
dynamic viewing and storytelling experience.

What Are The Different Angle Shots In Film?


 High-angle
 Low-angle
 Over-the-shoulder
 Bird’s eye
 Dutch angle/tilt

Types of Camera Shot Angles


Shots Indicating Camera Angle/Placement
 Eye-level shot: This is when the camera is placed at the same height
as the eyes of characters.
 Low angle shot: This shot frames the subject from a low height, often
used to emphasize differences in power between characters.
 Aerial shot/helicopter shot: Taken from way up high, this shot is
usually from a drone or helicopter to establish the expanse of the
surrounding landscape.
 High angle shot: This is when the subject is framed with the camera
looking down at them.
 Birds-eye-view shot/overhead shot: This is a shot taken from way
above the subject, usually including a significant amount of the
surrounding environment to create a sense of scale or movement.
 Shoulder-level shot: This is where the camera is approximately the
same height as the character’s shoulders.
 Hip-level shot: The camera is approximately at the height of the
character’s hips.
 Knee-level shot: The camera is approximately at the same level as
the character’s knees.
 Ground-level shot: When the height of the camera is at ground level
with the character, this shot captures what’s happening on the ground
the character is standing on.
 Dutch-angle/tilt shot: This is where the camera is tilted to the side.
 Cut-in shot: This type of shot cuts into the action on the screen to
offer a different view of something happening in the main scene.
 Cutaway shot: A shot that cuts away from the main action on the
screen; it’s used to focus on secondary action and add more
information for greater understanding for the audience.
 Master shot: A long shot that captures most or all of the action
happening in a scene.
 Deep focus: A shot that keeps everything in the screen in sharp focus,
including the foreground, background, and middle ground.
 Locked-down shot: With this shot, the camera is fixed in one position
and the action continues off-screen.
 Library shot: Pre-existing film of a location that’s pulled from a library.
 Matte shot: A shot that incorporates action in the foreground with a
background that is created on a computer.
 Money shot: An expensive shot that is designed to startle or wow the
audience.
 Top shot: A shot that looks directly down at a scene. Also known as a
birds-eye-view shot.

Camera Shots
Camera Angles
The camera angle marks the specific location at which the movie
camera or video camera is placed to take a shot. A scene may be shot
from several camera angles simultaneously. This will give a different
experience and sometimes emotion.

Types of Shots, Camera Angles, and Movements

Camera Shots and Angles


The way in which you capture a scene has a dramatic impact on how it is
perceived. How you frame the subject, how far they are from the camera,
the perspective from which they are seen, the movement that reveals their
actions: every single detail counts when it comes to video. Failing to control
these elements will likely result in unusable footage or, even worse, a
beautiful video that tells a completely different story than the one you
wanted to create. Whether you’re a beginner or just looking to brush up
your video making skills, this collection of types of shots, camera angles,
and movements will help you bring your ideas to life.

01. Establishing shot


The establishing shot is a very wide shot used at the start of a sequence.
It’s used to introduce the context in which the action takes place. Aerial
shots are usually the preferred pick for these scenes, as they offer an
unparalleled view of locations.

02. Long shot

A long shot captures the subject within a wide view of their surroundings.
This type of camera shot is commonly used to set the scene. It gives
viewers a sense of perspective as they can see how the subject relates to
their environment.

A closer version of the long shot is known as a full shot. In a full shot, the
subject fills the frame. This captures the subject’s general appearance,
while still showing the scenery surrounding them.

03. Medium shot

The medium shot is used to reveal more details on the subject, capturing
them from the waist up. As it includes the subject’s hands and part of their
surroundings, it’s the best way to capture actions in detail, while
maintaining a general view. This is why the medium shot is one of the most
popular types of shots.

There are two main variants of this shot: medium long shot and cowboy
shot. The medium long shot sits halfway between long and medium shots.
It frames the subject from the knees up. The cowboy shot, which cuts the
frame at mid-thigh, was widely used in western movies in order to show
gun holsters on cowboys’ hips.

04. Medium close-up shot

The medium close-up shot frames the subject from the chest up. It is
generally used to capture enough detail on the subject’s face, while still
keeping them within their surroundings. During conversations, medium
close-up shots are used to keep some distance between the characters.
05. Close-up shot

A close-up shot tightly frames the subject’s face in order to focus on their
emotions. These types of shots are great to connect with the audience, as
there are no elements distracting them from the subject’s gestures and
reactions.

06. Extreme close-up shot

In an extreme close-up shot, a detail of the subject fills the whole frame. It
is used to emphasize certain features or actions. The most common use of
this shot will capture a character’s eyes, mouth, or fingers performing a
critical action.

07. Two shot

A two shot includes two subjects in the frame. They don’t necessarily have
to be next to each other, nor given equal treatment. In many examples of a
two shot, one subject is placed in the foreground and the other, in the
background.

08. Bird’s-eye view

Bird’s-eye view is the name given to the type of shot taken from an
elevated point. As its name indicates, it offers a perspective similar to
that which birds see while flying. This camera angle is used to magnify the
scale and movement.

09. High angle

High angle shot is taken pointing the camera down on the subject. As a
result, the subject is perceived as vulnerable and powerless. In this type of
shot, the camera angle can be anywhere from directly above the subject to
just above the subject’s line of sight.

10. Eye level


The eye level shot is considered the most natural camera angle. Capturing
the shot at eye-level offers a neutral perception of the subject. Because it is
the way in which we usually see people, this camera angle can help the
audience connect with the subject.

11. Low angle

A low angle shot is taken from below the subject’s eye line, pointing
upwards. This camera angle makes a subject look powerful and imposing.
This angle can create a visual distortion in types of shots closer to the
subject, as it’s not a common point-of-view. Because of this, a low angle is
commonly used with wider frames such as medium or medium close-up
shots.

12. Worm’s-eye view

The worm’s-eye view camera angle looks at an object or subject from
below. It is commonly used to capture tall elements in the scene, such as
trees or skyscrapers, and put them in perspective. This type of camera shot
is mostly taken from a subject’s point of view.

13. Over the shoulder

An over the shoulder framing captures the subject from behind another
character. Typically, the shot will include the second character’s shoulder
and part of their head. This camera angle is primarily used during
conversations, as it maintains both characters in scene while focusing on
one at a time.

14. Point of view


A point of view shot shows what the character is looking at. It is used to
highlight specific details or actions, such as being threatened or seeing
their reflection in the mirror. This type of shot allows the audience to put
themselves in the shoes of the subject. As a result, it strengthens their
connection with the subject and scene.
15. Pan
Panning is the action of moving the camera horizontally on a fixed axis.
During a pan shot, the camera turns from side to side without changing its
position. This type of camera shot is commonly used to follow an action or
to allow viewers to get a sense of location in the sequence.
16. Tilt

Tilting is a type of shot in films in which the camera is moved vertically on a
fixed base. It is normally used to reveal the identity of new characters or
relate an action with its performer. In some cases, tilt shots are used to
offer a general view of the space surrounding the character.

17. Dolly

On a dolly shot, the camera is attached to a wheeled device and smoothly
moves back and forth. The device itself is known as a camera dolly. Dolly
shots usually follow a subject as they move around the scene, generally in
front of or behind them.

18. Truck

Truck shots are those in which the camera is attached to a device that
moves smoothly along a horizontal track. These shots are most commonly
used to follow an action or walk the audience around a scene. Because the
camera itself is moving, the result allows viewers to feel as if they are also
moving across the scene.

19. Pedestal

A pedestal shot involves moving the camera vertically on a fixed location.
With this movement, the sight level of the audience is changed while
maintaining the same viewing angle. Because the camera is not static on
its axis, new details are slowly revealed to the viewer as the framing
changes.

20. Roll

In a roll camera movement, the camera is rotated on its lens axis (the axis
pointing at the subject). During this type of shot, the camera stays pointed
at the same subject; as a result, the footage can rotate up to 180°. This
movement is commonly used in action scenes or to capture a feeling of
sickness and dizziness.

Eye Level Shot


Have you ever seen a movie where the camera is placed at eye
level and it’s following the actor? It gives an interesting
perspective for filmmakers, but what does it mean for you?
It captures more of your surroundings so it can be used to create
a better context for pictures, videos, and other types of media.
Everyone knows that an eye-level shot is the best way to capture
people in their natural state. But what if you stand on a ladder or
get your camera up high?
What about when it’s not possible to even get close enough for an
eye-level shot?
Eye-level shots are an important cinematography technique that
can be used to show perspective and give the viewer a sense of
what it’s like to experience something from the point of view of
someone in particular.
Low Angle Shot
A low-angle shot is a type of camera angle in which the camera
points upward at the subject from below eye level.
These shots make people and objects seem taller and more
dominant than they might be in reality, as well as create an
illusion of depth.
The most common use for this technique is to show power or
dominance on behalf of one side over another, such as when
filming an interview between two people who are sitting at
different heights relative to each other.
This technique can also lend itself well to showing off architectural
features like tall ceilings or columns, making them appear even
larger than life.
This type of shot is best for capturing subjects that are looking up
at something or someone in a position of power.
It creates an interesting perspective and allows your subject to
appear larger than life.
The great thing about this type of shot is that anyone can do it! All
you need is a camera and some creativity!
High Angle Shot
There are many other ways that filmmakers can use this camera
angle to their advantage – for instance, by making the audience
feel small and powerless when surrounded by towering buildings
or rows of enemies.
The high angle shot can be achieved with a crane, scaffolding, or
an elevated surface such as a bridge.
The height at which the camera is placed dictates the strength of
the effect: viewers perceive themselves as higher than the
subjects below, who in turn appear small and less powerful.

Dutch Angle Or Dutch Tilt Shot


This technique has been used in cinema for years and can be
seen in films like Alfred Hitchcock’s “Psycho.”
The use of this technique, which creates a distorted sense of
space and movement, is often used by filmmakers to create
suspense.
For example, when an object moves from left to right on screen it
appears that it’s moving much faster than normal because the
camera tilts up as it moves away from us.
Another effect created with this tilt is that objects appear closer
than they actually are because they’re being viewed at an angle
rather than straight-on.
Dutch angle shots have been used for many years in both
photography and film production. They are often seen on
television shows, sporting events, and commercials.
A Dutch angle shot gives an image or video a disorienting feeling
by rotating the camera away from its normal orientation so that
everything looks askew.
It can also be achieved by turning the camera sideways during
filming with your phone in landscape mode instead of portrait
mode.
A Dutch angle shot (also known as a Dutch tilt) is when the camera
is tilted to one side, creating an unusual angle. This
type of shot can be used to create unease in the viewer. It’s also
been used to show instability and tension in scenes that require it.
It can be used for artistic purposes, and to create a sense of
unease in the viewer by making it difficult to tell which way is “up.”
The term is usually traced to German Expressionist cinema of the
early twentieth century (“Dutch” is widely believed to be a
corruption of “Deutsch”), whose filmmakers used the tilted frame
to unsettle audiences.
What are Camera Movements?
Simply put, a camera movement is a filmmaking technique that
describes how a camera moves about to help enhance a story.
Specific camera movements help change the audience’s view
without cutting; they can be a great way to make your video more
immersive and engaging. When it comes to User-Generated
Video, you might not be able to control the type of footage you
receive. Luckily for you, many of these movements can be added
in post-production.

Why are Camera Movements Used?


There are three popular types of camera moves: pans, tilts, and
zooms. It's easy to see how these types of moves can be
confusing when it comes to determining which ones you should
use in your video or film. A camera move is a movement of the
camera on its own, without any motion from the subject.
Camera movement can add a lot of meaning to your footage,
changing and shaping a viewer’s perspective of a scene. It’s
essential to understand how your viewers interpret different
types before adding in pans, zooms, tilts, and the like.

Basic Camera Movements


Pan
First up is the pan. A pan is when you move your camera from
one side to the other. Panning is generally helpful for revealing a
larger scene, like a crowd, or for revealing something off-screen.

Step your speed up a notch, and you get the whip pan, which is
handy for transitions showing the passing of time or travelling a
distance dramatically or comically.

Tilt
To tilt, imagine your camera is your head nodding up and down.

Tilts are helpful as a ‘reveal’ technique, either to unveil something from top to bottom or the reverse.

Zoom
‘Zooming’ is probably the most commonly used camera
movement; it lets you quickly move closer to the subject without
physically moving. But be careful with these, as zooming
(particularly digital zoom) can lessen your image quality.

When you give zooming a go, keep the movement as smooth as possible.

Tracking shot
A ‘tracking shot’ is one in which the camera moves alongside
what it’s recording. Tracking shots are sometimes called dolly
shots, but they can be differentiated by the direction they take.

Tracking shots will generally follow along the horizontal axis as
the subject moves. You’re probably familiar with walking and
talking scenes where a tracking shot stays on the subjects as
they move.

Tracking shots are also helpful for showing a stretch of road or scenery.

Rules of Composition
As in visual arts, composition in photography is simply the arrangement of
visual elements within a frame. The term composition literally means
'putting together'. So, to get the perfect shot, the photographer has to
organize all objects in a scene.

 Rules of Shot Composition.


 The Rule of Thirds.
 Balance and Symmetry.
 Leading Lines.
 Eye-Level Framing.
 Depth of Field.
 Deep Space Composition.
The rules of shot composition

First, just like any "rules" in photography or cinematography, the rules of
composition are made to be broken. As much as we rely on these rules in
most cases, the elements of composition are most exciting when they go
against the grain. Before we learn the rules of shot composition, it might be
better to understand what is meant by composition.

COMPOSITION DEFINITION

What is composition?
Composition refers to the way elements of a scene are arranged in a
camera frame. Shot composition refers to the arrangement of visual
elements to convey an intended message.

The Rule of Thirds


Firstly, the rule of thirds is one of the most common camera framing
techniques used in film or photography. It's about positioning a character to
show their relation to other elements in the scene.

Imagine a tic-tac-toe board — two lines running vertically, and two more
running horizontally.

As the camera frames your shot, keep your subject on or near the intersecting lines.
It’s more pleasing to the eye. But also, different camera framing will tell a
different story. It is an easy way to determine the character's place in the
world.
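To make the grid concrete, here is a minimal sketch that computes the two vertical and two horizontal grid lines and their four intersections (the "power points") for an assumed 1920×1080 frame; the numbers are illustrative only.

def rule_of_thirds_points(width, height):
    xs = [width / 3, 2 * width / 3]     # the two vertical grid lines
    ys = [height / 3, 2 * height / 3]   # the two horizontal grid lines
    # The four intersections are where key subjects tend to sit.
    return [(x, y) for x in xs for y in ys]

print(rule_of_thirds_points(1920, 1080))
# [(640.0, 360.0), (640.0, 720.0), (1280.0, 360.0), (1280.0, 720.0)]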

The frame composition in Nightcrawler is, well, crawling with this rule. Lou
appears on the side of the frame, away from the world he exists in.

Gilroy's use of the rule of thirds isolates Lou, highlighting isolation as the main
theme. The rule of thirds can also be used with two characters.

Remember to draw that imaginary tic-tac-toe board. Put a character on
either side of the intersecting lines. Gilroy also uses this technique to show
the darker side of the character. In many scenes, Lou is placed off-center
and only the side of his face is visible.

It’s hard to trust someone we rarely ever see.

The director's decision to position Lou this way, showing only his profile,
creates an untrustworthy, distant character.

Mastering frame composition and framing in film also allows you to break
some of the rules of composition.

RULES OF COMPOSITION

Balance and symmetry

Understanding frame composition rules is invaluable knowledge for
directors and cinematographers. And so is knowing when to break them.

Shooting a perfectly symmetrical shot, breaking the rule of thirds, is used
for very specific reasons. Gilroy puts Lou in the center of the screen,
ignoring the rule of thirds.

Artists use this technique to direct the viewer’s eye to a specific place. And
leading the eye to the center of the screen might end up serving your story
better and garnering more emotion. Past films have done this well. Balance
and symmetry in a shot can be very effective.

Consider these examples from Stanley Kubrick's best movies.

They often reveal character traits and power dynamics.

Or they create a place so perfectly symmetrical, the audience feels
instantly overwhelmed. If you know anything about Wes Anderson's
directing style, you know he loves a symmetrical frame.

Working in tandem with the rules of composition, blocking and staging
are also responsible for creating dynamic frames.

BLOCKING DEFINITION

What is blocking?
Blocking is the way the director moves actors in a scene. The director's
approach to blocking is dependent on the desired outcome (e.g., for
dramatic effect, to convey an intended message, or to visualize a power
dynamic).

Blocking the actor in a symmetrical shot can be a very effective way to lead
the viewer to a certain feeling or emotion, and it helps us visualize the power
of blocking and staging.

Note that leading the eye of the viewer should be your priority in every
scene you frame. Blocking often uses leading lines to control what the
audience sees, and how they see it. This affects how they interpret it.

FILM COMPOSITION TECHNIQUES

Leading lines

Leading lines are actual lines (or sometimes imaginary ones) in a shot, that
lead the eye to key elements in the scene.

Artists use this technique to direct the viewer’s eye but they also use it to
connect the character to essential objects, situations, or secondary
subjects. Whatever your eye is being drawn to in a scene, leading lines
probably have something to do with it.
It is a very useful type of shot composition as it conveys essential context
to the audience. Let's see how it's used in Nightcrawler. These stringer
scenes use leading lines to take us to the accident.

Can you spot them?

The diagonal line from Lou's feet to the back wheels of the police car helps
frame the shot. It is a leading line that, interestingly enough, also represents
what his camera is able to capture.

Another leading line is established by the character's movement: he is
walking towards the accident with his camera, in a straight line.

Both the diagonal and straight line frame the crash as the focus. What's
interesting is that his camera is also doing that inside of the scene.

Composing with leading lines creates depth and draws the eye.

While this rule of composition helps lead us to our focus, other techniques
help us connect to our focus.

Eye-level framing

Eye-level framing positions the audience at eye-level with the characters,
which plants the idea that we are equal with the character. It leads the eye
and the mind to consider how we would feel if we were there, because it
almost feels like we already are.
Nightcrawler uses this technique to elicit empathy for its anti-hero. No
matter how Lou's character read on the page, what we saw on screen was
somebody just like us.


This sense of connectedness is important when watching an anti-hero. It
makes room for empathy, an emotion needed to stay connected to a very
flawed character. And so, we stick around for the ride.

Just by showing the viewer the eyes of the character, the audience sees
into their soul. It might not be a steadfast rule of shot composition, but it is
an effective technique.

A close-up on his eyes signals that Lou's state of mind and inner feelings
are important right at that moment. It allows us to feel what he’s feeling. It is
the easiest way to garner empathy.

COMPOSITION EXAMPLES

Depth of field

To understand cinematography is to understand depth of field.

Mastering spatial composition in the frame is one of the hallmarks of
effective visual storytelling. But before we learn how to manipulate depth to
benefit our story, why don't we first define depth of field.

DEPTH OF FIELD DEFINITION

What is depth of field?


Depth of field describes the size of the area in your image where objects
appear acceptably sharp. That area is called the field, and the size of that
area is the depth of that field.

Depth of field is essentially your zone of sharpness. If you make that zone
longer, bringing more objects into focus, you will have a deep depth of field.

Similarly, if you make that zone shorter or smaller, with less in focus, you
will have a shallow depth of field. One way to achieve this adjustment is by
using the lens aperture.
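To see how the aperture adjustment plays out numerically, the sketch below uses the standard thin-lens depth-of-field approximations. The 50 mm lens, 3 m subject distance, and 0.03 mm circle of confusion are assumed example values, not figures from the text, and the formulas hold for subjects closer than the hyperfocal distance.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    # Hyperfocal distance, then the near and far limits of acceptable sharpness.
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

for f_number in (1.8, 5.6, 16):
    near, far = depth_of_field(50, f_number, 3000)
    print(f"f/{f_number}: sharp from {near / 1000:.2f} m to {far / 1000:.2f} m")

# Opening up (f/1.8) shortens the zone of sharpness (a shallow depth of
# field); stopping down (f/16) stretches it (a deep depth of field).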

Now that we know a little more about this, we can manipulate our depths of
field to convey different feelings, tones, and relationships between objects.

RACK FOCUS DEFINITION

What is rack focus?

A rack focus in filmmaking is changing the focus during a shot. The term
can refer to small or large changes of focus. If the depth of field is shallow,
the technique becomes more noticeable.

One of my favorite ways to manipulate an image is to use rack focus. It
changes focus right in the middle of a shot.

If the filmmaker starts off with a large depth of field, and in the same shot
moves to a shallow one, the new focused element becomes the
centerpiece for the scene.

It dramatizes the moment just by switching the types of camera focus,
manipulating the depth of field. This technique is used to memorable effect
in Casino Royale.
But shooting elements in focus isn't the only way to tell your story.

Shooting out of focus can be just as powerful. We'll see
in Nightcrawler how the director uses blurring to illustrate his themes.

BOKEH DEFINITION

What is bokeh?

In photography, bokeh (/ˈboʊkeɪ/ BOH-kay; Japanese: [boke]) is the
aesthetic quality of the blur produced in the out-of-focus parts of an image
produced by a lens. Bokeh has been defined as "the way the lens renders
out-of-focus points of light".

Bokeh is the circular blurring produced from out of focus images.

A huge theme in Nightcrawler is isolation. To show this, Gilroy uses rules of
shot composition well with bokeh.

This kind of shot helps distance Lou from the world around him.

He is connected only to himself.

The director also keeps people out of focus. This technique is used to
reveal more about Lou’s alienation from society.

RULES OF COMPOSITION

Deep space composition


Because there are so many nuances in shot composition, it's sometimes
hard to keep track of all these techniques. I felt it best to create a separate
section for deep space composition apart from depth of field.

We will define deep space shots, as well as something called deep focus,
and determine how they all relate to each other.

We then will examine how they often work together to capture intentional
(and incredible) moments in film.

DEEP SPACE COMPOSITION DEFINITION

What is deep space composition?

Filmmakers use deep space when significant elements in a scene are
positioned both near and far from the camera. These elements do not have
to be in focus.

This is unlike deep focus, which is defined by elements both near and far
from the camera being in focus.

Citizen Kane's famous deep focus scenes are still some of the best
examples of how knowing the rules of shot composition can help you tell a
deeply personal story.

In this scene, deep space composition positions the characters at different
depths, while all four stay in focus.

The different depths are indicative of what is going on with each character.
The little boy appears far away, but in frame, to remind us that he is going
to be out of the picture soon, once they send him away.
With Lou positioned far from the camera and Nina stationed a bit closer, we
see deep space at work. This separation from each other highlights their
different personalities.

FILM COMPOSITION

Wrapping up the rules of composition

Like we said before, the rules of composition are more like suggestions.
They are meant to guide and assist, not to limit or prohibit. There are many
instances when you really should obey the rules of composition, and other
times when breaking them is ideal.

White Balance
What is white balance?
White balance refers to the colour temperature at which white objects
on film actually look white. But it's not just about the appearance of
white; all the colours in your shot are determined by how you set your white
balance.
The function that corrects these color issues is the digital camera's "white
balance." Essentially, white balance adjusts images to make white
subjects look white in the final product. By making good use of white
balance, you'll be able to manipulate the tone of your pictures at will.

White balance is a vital camera setting to properly calibrate in
order to achieve your desired image. If you have ever taken a
photograph or video and found the result to look unnaturally blue or
yellow, then improper white balance is to blame. We’ll break down
what white balance is, how to calibrate it, and what to do if you
have already shot your footage without white balancing
beforehand.
WHAT IS WHITE BALANCE

First, let’s define white balance


Whether you are a still photographer or a filmmaker, white
balance will play an important role in the images you capture. The
process of calibrating this essential camera setting is much the
same for stills and for video.
WHITE BALANCE DEFINITION
What is white balance?
White balance is a camera setting that establishes the true color
of white. This produces a baseline from which all other colors are
measured. White may not appear “white” under all lighting
conditions, so this helps correct it. White balance can be
automatically determined by the camera, chosen from a list of
presets, or manually set by the user.
What Does White Balance Do:
Applies to still photography and videography
Defines “white” and adjusts all other colors accordingly
Improper balance leads to a yellowish or blueish image
HOW TO DETERMINE WHITE BALANCE
Understanding color temperature
The reason why setting white balance is necessary comes down
to the color of the light upon the subject, also called the colour
temperature. Whether dealing with natural light outdoors or man-
made lighting fixtures indoors, light can come in a wide variety of
intensities, values, and temperatures.

This is why photographers and filmmakers plan every scene in
consideration of white balance, from lighting to post-production.

How does colour temperature affect white balance?


All light resides on a colour temperature scale, which is why different light
sources have different colours. Think about the warm orange glow of
candlelight compared to the cold, almost bluish beam of fluorescent lights.
To measure the temperature of colours, filmmakers use the Kelvin scale.
Colours at lower temperatures are warmer and on the red, orange and
yellow side of the spectrum: for example, incandescent bulbs or sunset.
Colours at higher temperatures have a cooler colour cast to them. Natural
light resides on this blue end of the scale.

The correct white balance for a scene depends on the colour temperature
of the light. “In the most basic terms, your white balance tells you that if
your whites are off, your colour temperature’s off. And if your colour
temperature is off, you've got to figure out if your camera is set at the
wrong colour temperature or if the issue is in your lighting,” says
videographer Hiroshi Hara.

How to gauge white balance in different lighting situations.


The first step to solving any white balance issues on set is to learn the
colour temperatures of the different lights. Unless you work in a controlled
environment where you can control the colour temperature of your light
sources, you’ll need to familiarise yourself with a few standard scenarios to
keep your white balance in check.

Daylight
The standard temperature for outdoor natural light is 5,600 kelvin
(K). This means that if you want a white piece of paper to appear white in
your shot, you would need to set your white balance to 5,600 K. This is the
industry standard setting, but it’s just a starting point. A sunny day with a
blue sky might be slightly warmer than an overcast, cloudy day. Sunset and
sunrise will almost always have a much lower colour temperature than high
noon.
Tungsten
For indoor lighting, also referred to as tungsten light, the standard setting is
3,200 K. Light bulbs and other artificial lighting usually have warmer
temperatures than outdoor lighting, so if you move your piece of paper from
outside to inside, you need to dial down your white balance to compensate
for the warmer colour temperature. Like daylight, tungsten settings vary
across the spectrum, from warm incandescent lights to LEDs that sit closer
to daylight temperatures.
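A quick, purely illustrative sketch of the mismatch described above (this mirrors the reasoning; it is not a camera API): if the light is warmer than the camera's white balance setting, the image takes on an orange cast, and if it is cooler, a blue cast.

def color_cast(light_k, camera_wb_k):
    # Lower kelvin = warmer (orange); higher kelvin = cooler (blue).
    if light_k < camera_wb_k:
        return "orange/yellow cast (light is warmer than the setting)"
    if light_k > camera_wb_k:
        return "blue cast (light is cooler than the setting)"
    return "neutral whites"

print(color_cast(3200, 5600))  # tungsten light on a daylight setting
print(color_cast(5600, 3200))  # daylight on a tungsten setting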

Adjust your white balance settings in-camera.


Your eyes adjust automatically to different colour temperatures, but
cameras can’t. You have to tell your camera the proper white balance for a
given scene. While you can do this in post-production, it’s best to get your
white balance as close to accurate as possible in-camera. Doing this will
save you time in the long run and ensure that you’ve got the proper visual
data in your footage to work with when you edit.

This doesn’t mean you have to achieve to-the-degree accuracy. Most
digital cameras can shoot in raw format, which leaves a lot of room for
editing in post. Hara recommends starting with a white balance preset, like
daylight or tungsten and manually adjusting from there. You can do this in
many different ways; here are a few options.

Automatic white balance or manual white balance?


Auto white balance (AWB) is a great choice for beginners. Your camera’s
white balance setting is likely good at reading ambient light and making
balance adjustments on its own. But if you’re a more seasoned
videographer, consider switching to customised white balance to give
yourself more control. You’ll also want to set white balance yourself in tricky
lighting conditions that can easily fool the camera’s judgement.

White balance cards and grey cards.


Professionals often use these cards to help get the proper exposure and
white balance for their shot. “A white balance card is just a fancy term for
something that’s white,” says Hara. An official card will give you the most
accurate tone and anti-reflective finish, but you can use any pure white
object as a stand-in. This card serves as a reference point for your camera,
making it easier for you to test for the proper balance in situations where
you don’t know the colour temperature.

A grey card is similar to a white card in that they are both reference points
to gauge white balance and exposure. But a grey card is a specific shade
of grey made to be completely neutral. This makes it easier for your
camera to read the light and choose the best white balance. To use a grey
card, simply place it in front of the camera while on customised white
balance mode and take a few shots. This is the manual version of using
AWB, in which your camera searches for neutral areas in the frame for
itself.
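The arithmetic behind the card is simple enough to sketch: sample the card in the frame, then scale the red and blue channels so the sample reads neutral. This assumes a NumPy image in RGB order, and the patch values below are invented for illustration.

import numpy as np

def white_balance(image, patch_rgb):
    # Scale R and B so the sampled neutral patch matches its green level.
    r, g, b = patch_rgb
    gains = np.array([g / r, 1.0, g / b])
    balanced = image.astype(np.float64) * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# A grey card under warm tungsten light might read high in red, low in blue:
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[:] = (200, 160, 110)                      # made-up warm-lit grey card
corrected = white_balance(frame, patch_rgb=(200, 160, 110))
print(corrected[0, 0])                          # -> roughly [160 160 160], neutral grey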

Lighting
“Any time you mix lighting sources, it’s going to make it hard to find your
white balance,” says videographer Margaret Kurniawan. Use a single light
source or match every light to the same temperature to avoid different
colour temperatures across your scene. You can also use a light meter to
get a reading on the temperature. Aim for consistency across your set,
lighting and camera so you can minimise time spent correcting colours in
post-processing.
Fine-tune your white balance in editing.
Just because you nail your white balance in-camera doesn’t mean you
should leave it untouched in post-production. “There are two sides to
manipulating white balance,” says cinematographer Mike Leonard.
“There’s colour correction, which is the science side of it and then there’s
colour grading, which is the art of it.” Colour correction is about bringing
colours back to their accurate tones for a true-to-life look. You can set the
correct white balance in-camera and continue this process in post.
Colour grading, on the other hand, is a subjective art. “A great film example
of this is The Matrix,” says Leonard. “When they’re in the Matrix everything
has a harsh green hue, but when they’re in the real world there’s a very
distinct blue gradient. That was a creative decision to make the two worlds
feel very different.”

Whether you want to evoke happiness and nostalgia with warm colours or
cooler, bluer tones for a moodier aesthetic, you can create the look you
want with video editing apps like Adobe Premiere Pro.

Best practices for better balancing.


From pre-production to post-production, white balance is a top
consideration for every filmmaker. If you’re not sure how to hone your eye
for white balance, start out on automatic white balance and adjust from
there. Raw files leave plenty of room to tweak when you edit, so don’t worry
about getting it exactly right. Try to experiment and familiarise yourself with
the spectrum of colours and their corresponding temperatures. To get faster
at gauging white balance on set, pay attention to the light off set. Notice the
quality of light around you in everyday life; you’ll be surprised how quickly
you can develop an eye for different temperatures and white balances.

Unit – III: Lighting
1. Light and its Properties
Lighting is fundamental to film because it creates a visual mood,
atmosphere, and sense of meaning for the audience. Whether it's
dressing a film set or blocking actors, every step of the cinematic process
affects the lighting setup, and vice-versa. Lighting tells the audience where
to look.

Lighting is a key factor in creating a successful image. Lighting
determines not only brightness and darkness, but also tone, mood, and
atmosphere. Therefore, it is necessary to control and manipulate light
correctly in order to get the best texture, vibrancy of colour, and luminosity
on your subjects.

The primary properties of light are intensity, propagation direction,
frequency or wavelength spectrum and polarization. Its speed in a
vacuum, 299 792 458 metres a second (m/s), is one of the fundamental
constants of nature.
Light has four attributes you can work with to make your scenes look the
way you intend. The color, intensity, quality and direction of your light
sources all play a role in determining the overall look of your video.

Together with light, color is one of the most important elements of
photography. It affects everything from composition and
visual appeal to the viewer's attention and emotions.

2. Different Types of Lights

The two main categories of filmmaking light sources
are artificial and natural light. Artificial lights can be
either on-camera or off-camera, while natural light nearly
always comes from an outside source such as the sun or a
window.

Video lights have four different types of bulbs or lamps. They
are incandescent (tungsten), fluorescent, halogen (or quartz)
and Hydrargyrum Medium-Arc Iodide, better known as
HMI. Incandescent – The incandescent or tungsten bulb is just
like the bulb in your living room lamp.
What Are the Different Types of Lights/Bulbs?
 Tungsten/Incandescent light bulbs: An incandescent
light bulb, incandescent lamp, or incandescent light
globe is an electric light with a wire filament heated until
it glows.
 Basically, an incandescent light bulb is a controlled
fire on display. When electrical current makes contact
with the base of the bulb, electricity enters and heats
the tungsten filament housed inside. And when the
filament heats up, “incandescence” is created, which is
light produced by heat.
 Halogen incandescent light bulbs.
 Fluorescent light bulbs.
 Compact fluorescent bulbs (CFLs)
 LED light bulbs

Lighting creates a visual mood in a photograph, and in
photography, there are two main types of lighting: hard light
and soft light. A skilled photographer should know the
difference between hard light and soft light, how to create
each, and which one works best for a given shot.
Tungsten lighting kits for film and video production have
been a tried-and-true, industry-standard lighting choice for
years. They're similar to the incandescent filament bulbs
common in interior lighting, so they are a great choice for
interior lighting setups.
Other Tools Used in Lighting: Diffusers, Reflectors, Cutters,
and Gels
Choosing Filter and Gel Accessories
There are a few accessories and products you can use in
conjunction with gels and diffusers. For example, filter and gel
frames hold lighting gel sheets in place. The gel fits inside,
making it possible to swap out lighting gel sheets for different
effects. They come in different shapes such as rectangles,
squares, octagons, or circles. Other filter and gel
accessories include magnetic holders and mounts, gel and
diffusion extenders, and pouches and storage bags, among other
useful items.

3. Other Tools Used in Lighting:


Diffusers
Light diffuser
A light diffuser can be created in various ways, but
understanding what it does in the most fundamental way will
help you understand how you can achieve diffused light
through whatever means you have.
Light Diffusers:
Light diffusers scatter light to make it softer and more natural.
They're available in a variety of different materials. While diffusion
gels consist of plastics similar to colored gels, fabric diffusers are
also an option. You can use fabric diffusers and colored
photography gels to lessen the intensity of studio lights and to
introduce colors and tones for different effects. Diffusers reduce
exposure and soften the light in varying measurements, known as
stops. Purchasing diffusers in various strengths is an easy way to
add more variety to light options in your setup.
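Since each stop halves the light, the relationship between a diffuser's strength in stops and the light it lets through is a simple power of two; a quick sketch:

def transmission(stops):
    # Each stop of diffusion cuts the transmitted light in half.
    return 0.5 ** stops

for stops in (0.5, 1, 2):
    print(f"{stops} stop(s) of diffusion -> {transmission(stops):.0%} of the light")
# 0.5 stop -> 71%, 1 stop -> 50%, 2 stops -> 25%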
What is a light diffuser?
To create diffused light, cinematographers and gaffers often
use a light diffuser. To put it simply, a light diffuser is a semi-
transparent piece of material placed in between a light source
and a subject to diffuse the light as it passes through the
material. This material does not solely block or cut light, but
redirects light as it passes through for a diffused light spread on
a subject.
What is a light diffuser used for?
Creating soft lighting
Creating soft shadows
Reducing the contrast of light
How a Light Diffuser Works
To understand what a light diffuser is, it’s important to
understand how light travels. As light travels from its source
through the air, it interacts with any particle on its path. As
the light photons interact with more and more material, its
path is changed.
The more material or particles that a light source’s photons
interact with before reaching a subject, the wider the spread
of photons and light is on a subject. This is known as
diffused light. Diffused light is used for soft lighting and less
harsh shadows. It is often more flattering on a subject. To
visually understand diffusion and how it is created, check out
this video by DSLR Video Shooter.
Diffused light is light that has an even concentration
across the spread of its beam. Otherwise known as "soft
light," It disperses light evenly across a surface or subject
and can soften shadows and produce a more flattering
image.
A light diffuser is a mechanism for scattering your light
output. Light diffusion reduces harsh shadows and balances
your lighting effects, creating even, soft light (like a
lampshade) on your subjects.
Diffused light is a soft light with neither the intensity nor
the glare of direct light. It is scattered and comes from all
directions. Thus, it seems to wrap around objects. It is softer
and does not cast harsh shadows.
Unlike a reflector, a diffuser doesn't change the direction
of the light — just its intensity and spread. Although a
reflector may soften the light a little bit, a diffuser is much
more powerful and creates a more natural-looking and
flattering light.

Getting the right lighting is essential for both outdoor and
indoor photography. How well you control the light dictates
the quality of your photos. You have to be able to adjust the
intensity, direction, and color of the light to put the amount of
light you need in the right place — whether you use natural
light, an artificial light source, or a combination of the two.
Two simple options for modifying the light in photography
are reflectors and diffusers.

Reflectors:-
What is a Reflector?
A reflector is an object with a highly reflective surface. As a
result, when the light hits the reflector, it bounces back at an
equal angle. This allows you to change the direction of the light
onto your subject.

For example, if the light source is on the subject’s left side, you
can place a reflector on its right side and achieve illumination from
both sides. By adjusting the reflector’s position, you change the
reflection angle and control the lighting effect.

Reflectors are a great way to use a single light source in multiple
ways. They often take the place of a secondary light source. They
work equally well with flash and continuous light sources, sunlight,
and artificial light sources.
A reflector does what its name suggests – it reflects light. It is a
simple tool that helps a photographer manipulate light by
providing an additional surface for the light to bounce off. A
reflector does not create light, but it simply directs or redirects the
existing light from a source.

How to Choose a Reflector


Reflectors are characterized by reflectivity and color. A reflector
with a shiny surface produces bright reflections, while one with a
matte surface reflects less light and produces a soft and even
light. The color of the reflector also influences the quality of the
light. They usually are white, silver, or gold. White and silver
reflectors don’t alter the color of the light, while gold ones make
the light look slightly warmer (e.g., sunset light). Of all colors,
silver has the highest reflectivity.
When to Use a Reflector
Reflectors are helpful anytime you have a single, directional light
source but need more than that. For example, when sunlight
produces harsh light and shadows, you may want to use a
reflector to balance the illumination onto your subject and reduce
shadows. For example, imagine a person sitting near a window
and bright sunlight coming through the window. If you take a
photo like that, the person will be half in light and half in shadow.
A reflector reduces the differences between the two halves and
creates a more balanced exposure.

You may also want to use a reflector when the light comes from
behind the subject, and you need it to come from the front too.
Some photographers use a reflector to reflect the light coming
from a flash to make it less harsh and obvious.
Cutters: –
A cutter, or flag, is a device used in lighting for motion
picture and still photography to block light. It can be used
to cast a shadow, provide negative fill, or protect the lens from
a flare.
Cutters are large, irregular-shaped light flags. Flag cutters
can keep light from spilling over to unwanted areas. They will
also protect your lens from light flare.
Lighting Gels:-
What Are Lighting Gels? Lighting gels are colored, translucent
sheets of thin plastic. Photographers, filmmakers, and stage
lighting technicians use them as filters to correct color and lighting
issues.
Simply defined, a gel is a transparent colored material used to
modify lights for photography and cinematography, placed
over light sources to create colorful effects. The two basic
types are color-correction gels and non-corrective, color-effect
gels.
Use colored
gels for lights, and color correction filters and gels to create
various tones and moods for movies, to replicate either daytime or
nighttime lighting, or to balance intensity, light temperature, and
other parameters.

Basic Lighting Techniques:-


The key light, backlight, and fill light all make up the three-point
lighting setup. Three-point lighting is a standard method used in
visual media. By using three separate positions, the
cinematographer can illuminate the subject any way they want,
while also controlling shadows produced by direct lighting.

1. Natural Lighting
2. Key Lighting
3. High Key Lighting
4. Low Key Lighting
5. Fill Lighting
6. Backlighting
7. Practical Light
8. Hard Lighting
9. Soft Light
10. Bounce Lighting
11. Side Lighting (Chiaroscuro Lighting)
12. Motivated Lighting
13. Ambient Light
Cinematography and film lighting is closely related to photography
lighting. You’ve probably heard of many of these techniques,
especially if you’ve done some studio photography in the past, but
it helps to learn how they can uniquely benefit filmmakers in
creating different moods and atmospheres in every scene.
It’s also important to note that these techniques are not clear-cut,
so many of them can actually take the form of several other film
lighting techniques. What matters is that you learn what each is
good for and are able to make the best use of them for achieving
your cinematic goals. The following are all the different types and
techniques of lighting in film:

 Key Lighting
 Fill Lighting
 Back Lighting
 Side Lighting
 Practical Light
 Hard Lighting
 Soft Lighting
 Bounce Lighting
 High Key
 Low Key
 Motivated Lighting
 Ambient Light
1. Key Lighting
The key light is also known as the main film light of a scene or
subject. This means it’s normally the strongest type of light in
each scene or photo. Even if your lighting crew is going for a
complicated multi-light setup, the key light is usually the first to be
set up.

However, just because it’s your “main” light doesn’t mean it
always has to be facing your subject. You can place your key light
anywhere, even from the side or behind your subject to create a
darker mood. Just avoid placing it near or right beside the camera
as this will create flat and direct lighting for your subject.

When to Use Key Lighting:


 Use key lighting when you want to draw attention to a subject or
make it stand out from the rest of the scene.

2. Fill Lighting
As the name suggests, this technique is used to “fill in” and
remove the dark, shadowy areas that your key light creates. It is
noticeably less intense and placed in the opposite direction of the
key light, so you can add more dimension to your scene.
Because the aim of fill lighting is to eliminate shadows, it’s
advisable to place it a little further and/or diffuse it with
a reflector (placed around 3/4 opposite to the key light) to create
softer light that spreads out evenly. Many scenes do well with just
the key and fill studio lighting as they are enough to add
noticeable depth and dimension to any object.
When to Use Fill Lighting:
 Use fill lighting to counteract shadows, or to bring up exposure
and decrease the contrast in a scene. With fill light, your viewer
can see more of the scene clearly.

3. Backlighting
Backlighting is used to create a three-dimensional scene, which is
why it is also the last to be added in a three-point lighting setup.
It faces your subject from behind and a little higher, so as to
separate your subject from the background.

As with fill lighting, you’ll want to also diffuse your backlight so it
becomes less intense and covers a wider area of your subject.
For example, for subject mid-shots, you’ll want to also light up the
shoulders and base of the person’s neck instead of just the top of
their head. This technique can also be used on its own, without
the key and fill lights if you’re aiming for a silhouette.

When to Use Backlighting:


 Use backlight to accentuate the silhouette of a subject, whether
it’s a person or an object. Backlighting creates a halo effect for
increased impact.
4. Side Lighting
Needless to say, side lighting is for illuminating your scene from
the side, parallel to your subject. It is often used on its own or with
just a faint fill light to give your scene a dramatic mood or what’s
referred to as “chiaroscuro” lighting. To really achieve this effect,
your side light should be strong so as to create strong contrast
and low-key lighting that reveals the texture and accentuates the
contours of your subject.

When used with a fill light, it’s advisable to lessen the fill light’s
intensity down to 1/8 of that of the side light to keep the dramatic
look and feel of a scene.

When to Use Side Lighting:


 Side lighting brings out the textures or edges in a scene. Using
side lighting creates a better sense of depth in a location. It can
make subjects seem farther off by accentuating the space
between them.

5. Practical Lighting
Practical lighting is the use of regular, working light sources like
lamps, candles, or even the TV. These are usually intentionally
added in by the set designer or lighting crew to create a cinematic
nighttime scene. They may sometimes be used to also give off
subtle lighting for your subject.

However, practical lights are not always easy to work with, as
candles and lamps are typically not strong enough to light up a
subject. A hidden, supplementary motivated light (more on that
later) may be used or dimmers can be installed in lamps so the
light’s intensity can be adjusted.
When to Use Practical Lighting:
 Use practical lighting when a performer or subject needs to
interact with a light source. For example, use a bedside lamp
that needs to function within the action of the scene.

6. Bounce Lighting
Bounce lighting is about literally bouncing the light from a strong
light source towards your subject or scene using a reflector or any
light-colored surface, such as walls and ceilings. Doing so creates
a bigger area of light that is more evenly spread out.
If executed properly, bounce lights can be used to create a much
softer key, fill, top, side, or backlighting, especially if you don’t
have a diffuser or soft box.

When to Use Bounce Lighting:


 Bouncing light off the ceiling creates more diffuse illumination
and results in even, soft light. When you need more ambient
light across a whole environment, bounce light is a great
choice.

7. Soft Lighting
Soft light doesn’t refer to any lighting direction, but it’s a technique
nonetheless. Cinematographers make use of soft lighting (even
when creating directional lighting with the techniques above) for
both aesthetic and situational reasons: to reduce or eliminate
harsh shadows, create drama, replicate subtle lighting coming
from outside, or all of the above.
When to Use Soft Lighting:
 Soft lighting is more flattering on human subjects. The soft
quality of the light minimizes the appearance of shadows,
wrinkles, and blemishes. Use soft lighting for beautification.

8. Hard Lighting
Hard light can be sunlight or a strong light source. It’s usually
unwanted, but it certainly has cinematic benefits. You can create
hard lighting with direct sunlight or a small, powerful light source.

Despite it creating harsh shadows, hard lighting is great for
drawing attention to your main subject or to an area of the scene,
highlighting your subject’s contour, and creating a strong
silhouette.

When to Use Hard Lighting:


 Hard lighting emphasizes changes in contour, shape, and
texture. Use hard lighting to create a more intense look.

9. High Key
High key refers to a style of lighting used to create a very bright
scene that’s visually shadowless, often close to overexposure.
Lighting ratios are ignored so all light sources would have pretty
much the same intensity. This technique is used in many movies,
TV sitcoms, commercials, and music videos today, but it first
became popular during the classic Hollywood period in the 1930s
and 40s.

When to Use High Key Lighting:


 Use high key lighting for dreamy sequences, or situations that
require overwhelming brightness.
10. Low Key
Being the opposite of high key, low key lighting for a scene would
mean a lot of shadows and possibly just one strong key light
source. The focus is on the use of shadows and how it creates
mystery, suspense, or drama for a scene and character instead of
on the use of lighting, which makes it great for horror and thriller
films.

When to Use Low Key Lighting:


 Use low key lighting for moody scenes that require a film noir
look or for nighttime scenes.

11. Motivated Lighting


Motivated lighting is used to imitate a natural light source, such as
sunlight, moonlight, and street lamps at night. It’s also the kind of
lighting that enhances practical lights, should the director or
cinematographer wish to customize the intensity or coverage of
the latter using a separate light source.
To ensure that your motivated lighting looks as natural as
possible, several methods are used, such as the use of filters to
create window shadows and the use of colored gels to replicate
the warm, bright yellow light coming from the sun or the cool, faint
bluish light from the moon.

When to Use Motivated Lighting:


 Use motivated lighting when you want to replicate a specific
light source’s quality of light. Filters, diffusers, and other
modifiers are helpful in these applications.
12. Ambient Lighting
Using artificial light sources is still the best way to create a well-lit
scene that’s closely similar to or even better than what we see in
real life. However, there’s no reason not to make use of ambient
or available lights that already exist in your shooting location, be
it sunlight, moonlight, street lamps, or even electric store signs.

When shooting during the day, you could always do it outdoors
and make use of natural sunlight (with or without a diffuser) and
supplement the scene with a secondary light for your subject
(bounced or using a separate light source). Early in the morning
and late in the afternoon or early evening are great times for
shooting outdoors if you want soft lighting. The only downside is
that the intensity and color of sunlight are not constant, so
remember to plan for the weather and sun placement.
When to Use Ambient Lighting:
 Use ambient lighting when you want to illuminate your subjects
without worrying about a specific style or quality of light.
Ambient lighting is a relatively universal light source that evenly
illuminates whole environments or scenes.

What are the three principles of light?


The three principles of lighting are direction, intensity, and
softness or hardness.

Direction
Direction refers to where the light or lights are coming from in
relation to the camera. Some common terms that refer to direction
of light is back light, top light, frontal, and profile. There are often
several different directions of light working together to make up
the totality of the lighting direction. If the light is hard enough, you
can often tell from which direction the light is coming.
Intensity
The intensity of the light is how much light is hitting any part of
your scene. The intensity can and often does vary from one part
of the frame to another. It also varies from one subject to another.
When working on set you will often hear that there should be a 4-
to-1 ratio from one side of the face to the other. Alternatively, you
can have a 3-to-1 ratio from the subject to the background. This
means the intensity of the light should be four times greater on
one side of the face in reference to the other and three times
greater on the subject than the background.
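Those ratios translate directly into stops, since each stop is a doubling of intensity: the difference in stops is the base-2 logarithm of the ratio. A quick sketch of the conversion:

import math

def ratio_to_stops(ratio):
    # Each stop doubles the light, so a ratio of 2**n is n stops.
    return math.log2(ratio)

print(f"4-to-1 (one side of the face to the other): {ratio_to_stops(4):.2f} stops")   # 2.00
print(f"3-to-1 (subject to background): {ratio_to_stops(3):.2f} stops")               # ~1.58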

Softness or hardness
Unlike direction or intensity, the softness or hardness of the light
is a more subjective quality. Hard light is often used to create
more mystery and drama. Soft light is often used when the drama
is not quite so intense or for more of a naturalistic look.

Having a good storyline, a capable film crew, well-cast actors, and
an amazing set design may all be essential components to
creating a successful film—but it also has to look visually
compelling if you want it to have a meaningful impact on the
viewers. This requires technical knowledge in cinematography,
which means using the most appropriate cinematic shots and film
lighting techniques to get your message across perfectly in each
and every scene.
Proper lighting techniques are essential in creating stylized and
natural-looking film scenes that look much closer to real life as
digital sensors and film don’t react as well to light as our eyes do.
This is why film sets always seem to be overly lit or packed with
many different light sources that serve different purposes.
If you’re aiming to become a cinematographer, director, writer, or
any other person who holds a creative role in a film crew, you’ll
need to learn some of the basic lighting techniques typically used
in filmmaking.

How Lighting Can Impact The Mood Of A Movie?

Lighting can be very impactful in setting the mood of a scene in a motion
picture, and it thus plays an important role in making films. It shows us what
is happening on the screen, and how a scene is lit goes a long way towards
determining how good it looks. If you are a movie buff, you must have
noticed that the same frame can look very different even when the camera
angle, actors, and setting are unchanged; that difference comes down to
lighting. It is an important tool that not only sets and creates the mood of the
film but also makes the scene much more impactful.

There are different types of film lighting techniques which will set the mood
for the movie:
Three-point Lighting:

It is the most significant form of lighting used today. This lighting is used to
make the scene less dramatic by balancing the shadows against the
highlights. As the name of the lighting suggests, the lights are placed at
three different positions: the key light, backlight, and fill light.

Harsh Light:

Harsh lighting is strong, directed light that is often present in the middle of
the day. It creates deep shadows with sharp lines on your subjects. It is a
technique that includes more contrast in the shot than a three-point light
setup.

Soft Light:

The soft light will create shadows which will slowly transform from light to
dark. This gentle gradation encapsulates the notion of soft light. When soft
light falls upon the subject, there won’t be any particularly visible
harsh lines. Before reaching the subject, the soft light will pass through a
diffusing medium. Some examples of soft light are indirect light, a cloudy day, or
sunset light.

Ambient Lighting:

It is natural light, also known as "no filter" light in filmmaking. This lighting
tells more about the environment in comparison to the subject itself. The
light helps to portray real scenarios in the film. We feel like we must be
following the character rather than simply watching them.

Warm Lighting:

This form of lighting is characterized by red, yellow, and orange tones. This
form of lighting is usually used in youth dramas and romantic stories.

Unit – IV: Sound


Audio Elements in Video Programmes:
Lip synchronized Sound:
Lip sync or lip synch (pronounced /sɪŋk/, the same as the word
sink, short for lip synchronization) is a technical term
for matching a speaking or singing person's lip movements
with sung or spoken vocals.
Lip synchronization is a process taking place on the receiver
end, where the audio and the video tracks are synchronized.

Lip synchronization:
1. Recording of sound simultaneously with photographing of action
so as to secure perfect synchrony of both (as when a motion
picture is projected)
2. Recording of sound and photographing of action at separate
times but utilizing techniques designed to secure synchrony of
sound and action when the two are combined
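In practice, fixing sync on the receiver or editing end means shifting the audio track by a measured offset. The sketch below illustrates the idea with NumPy; the 48 kHz sample rate and 40 ms offset are assumed example values, not figures from the text.

import numpy as np

def resync(audio, offset_seconds, sample_rate=48000):
    # Positive offset: audio leads the picture, so delay it with silence.
    # Negative offset: audio lags, so trim samples from the head.
    shift = int(round(offset_seconds * sample_rate))
    if shift > 0:
        return np.concatenate([np.zeros(shift, dtype=audio.dtype), audio])
    return audio[-shift:]

track = np.random.randn(48000)                 # one second of placeholder audio
aligned = resync(track, offset_seconds=0.04)   # delay by 40 ms (about one frame at 25 fps)
print(len(aligned) - len(track))               # 1920 samples of added lead-in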
Voice Over
Voice-over (also known as off-camera or off-stage
commentary) is a production technique where a voice—that is
not part of the narrative (non-diegetic)—is used in
a radio, television production, filmmaking, theatre, or
other presentations. The voice-over is read from a script and may
be spoken by someone who appears elsewhere in the production
or by a specialist voice actor. Synchronous dialogue, where the
voice-over is narrating the action that is taking place at the same
time, remains the most common technique in voice-overs.
Asynchronous, however, is also used in cinema. It is usually
prerecorded and placed over the top of a film or video and
commonly used in documentaries or news reports to explain
information.
Voice-overs are used in video games and on-hold messages, as
well as for announcements and information at events and tourist
destinations. It may also be read live for events such as award
presentations. Voice-over is added in addition to any existing
dialogue and is not to be confused with voice acting or the
process of replacing dialogue with a translated version, the latter
of which is called dubbing or re-voicing.
Music
Music is the art of producing pleasing or expressive
combinations of tones, especially with melody, rhythm, and
usually harmony; it is sounds that have rhythm, harmony, and
melody.
The science or art of ordering tones or sounds in succession, in
combination, and in temporal relationships to produce a
composition having unity and continuity.
Music, art concerned with combining vocal or instrumental sounds
for beauty of form or emotional expression, usually according to
cultural standards of rhythm, melody, and, in most Western
music, harmony. Both the simple folk song and the complex
electronic composition belong to the same activity, music.
The Importance of Music
Music touches us – it is a universal language of human-kind.
Whether it is being listened to or performed, it is incredibly
unifying. Music grants us feelings we can not convey by simply
talking. The influence music has on human emotion makes it an
extremely powerful tool for us in the world of marketing, or for
anyone who produces video.
Music in video can serve several functions. It can help craft a
wide range of emotional responses from an audience, create
rhythm for clips and scenes, and emphasize the overall story,
even in marketing. Music adds to the experience of a video,
regardless of if it is a blockbuster film, television sitcom,
advertisement, or brand video.
If you are having trouble understanding the importance of music in
video, imagine you’re watching a movie, television show, or
commercial, silent except for dialogue. Although some films use
this technique to convey a sense of gravity or silence, most of the
time, it can be highly uncomfortable to watch.
When you’re producing a commercial or another video to market
your business, most of the time you want to make your audience
as comfortable as possible. A video created specifically for a
business might include music so the audience feels at ease and
develops a connection to the company.
The Role of Music in Video
Music plays a vital role in video – it is one of the most crucial
steps in post-production and is an excellent way to capture
attention and communicate your brand.
Although music is a key component in video, a beginner may
overlook its importance or choose a song just because they like it,
which is detrimental to the success and quality of a video.
Experienced editors effectively choose music that enhances the
video and communicates the main message. Given the thousands
of songs that exist, choosing music for your video may seem
intimidating, but it is going to help your video and your brand more
than you can imagine.
As you start to develop your video, the information outlined below
can help you better understand why music plays such an
important role in your video, and how to choose the right music for
your video and brand.
Music Captures Attention
Music holds the attention of an audience – it shapes emotion and
motivates viewers. Whether it is inspirational or sad music, it can
be used as a signal to help viewers know what to pay close
attention to and how to feel.
Music builds value, making a product more memorable as the
music lingers in a viewer’s mind. Whether it’s a high-energy brand
video or a down-to-earth explainer video, you want your music to
be personalized instead of using generic stock music. Stock
music is a less expensive alternative to the use of well-known
music in a video. However, a lot of stock music sounds almost the
same – using it might not make your video or company
memorable.
For example, commercials and radio stations use original, catchy
jingles to establish their brand and make them recognizable to
audiences. Using music to establish an emotional connection with
a brand increases brand recognition and drives customers to
discover and share more of your brand’s content.
Music can give your business the boost it needs in order to win
the war for attention and develop a genuine relationship with
viewers.
Music Communicates your Brand
Music is a key component in conveying your business and brand
message. Music speaks volumes about your brand, which is why
the role of music is so important in video. Both puns intended,
music accompanying your video must be in tune with your
message. Every choice you make surrounding your business
conveys something about your brand, including music! If you are
making multiple videos within your company, music can vary quite
a lot from video to video. It’s one of the things that help distinguish
your videos from each other. Music reinforces the specific
message your brand is trying to convey to an audience and
reveals your brand’s personality.
Music also establishes mood and elicits certain emotions. With
the right music, you can make someone associate your brand
with a certain feeling, enabling them to take interest in your
company, brand, or product. Ideally, the music in your video
creates a positive feeling and therefore, gives the viewer a
positive feeling; or it might convey the gravity of a situation which
your business is helping to resolve.
Think of music as an opportunity to create meaning for your brand
by employing interesting musical pieces. Through the right music
choice, the customer imagines your identity, and buys into the
message of the video. With music, your video, and therefore your
brand and product, adopts meanings which are inherent in the
music.
Ambience
In filmmaking, ambience (also known as atmosphere, atmos, or
background) consists of the sounds of a given location or
space. It is the opposite of "silence".
An ambient film is largely plotless, focusing on character
through a more objective yet also more intimate viewpoint. In
ambient films we see characters live their lives in long takes that
are typically soundtracked with diegetic sound.
Ambience is another word for atmosphere in the sense of the
mood a place or setting has. If an expensive restaurant has
soft lighting and peaceful music, it has a pleasant, soothing
ambience.
Sound Effects
A sound effect is an artificially created or enhanced sound, or
sound process used to emphasize artistic or other content of
films, television shows, live performance, animation, video games,
music, or other media. Traditionally, in the twentieth century, they
were created with foley.
Sound is important because it engages audiences: it helps deliver
information, it increases the production value, it evokes emotional
responses, it emphasises what's on the screen and is used to
indicate mood.
In motion
picture and television production, a sound effect is a sound
recorded and presented to make a specific storytelling or creative
point without the use of dialogue or music. The term often refers
to a process applied to a recording, without necessarily referring
to the recording itself. In professional motion picture and
television production, dialogue, music, and sound effects
recordings are treated as separate elements. Dialogue and music
recordings are never referred to as sound effects, even though
the processes applied to them, such as reverberation or flanging,
are often called "sound effects".
Use of Microphones
A microphone is a device that translates sound vibrations in the
air into electronic signals, which can then be recorded to a medium
or reproduced over a loudspeaker. Microphones enable many types of audio
recording devices for purposes including communications of many
kinds, as well as music vocals, speech and sound recording.
Microphones are used wherever sound needs to be picked up
and converted into an electrical format. Microphones are an
essential part of any audio recording system. The microphone
picks up the sound and converts it into electrical energy that can
then be processed by electronic amplifiers and audio processing
systems.
Microphones are used in many applications such as telephones,
hearing aids, public address systems for concert halls and public
events, motion picture production, live and recorded audio
engineering, sound recording, two-way radios, megaphones, and
radio and television broadcasting.
Types of Microphones
There are 4 types of microphone:
 Dynamic Microphones.
 Large Diaphragm Condenser Microphones.
 Small Diaphragm Condenser Microphones.
 Ribbon Microphones.
There are so many types of microphones to choose from these
days. Audio companies make microphones for recording music,
podcast microphones, and gaming microphones, among many
others. Not to mention built-in microphones on headphones,
webcams, and speakers. To pick the right mic for the right task,
it’s helpful to understand the characteristics and behavior of the
various types of microphones.
What is a Microphone or Mic?
Microphones capture sound waves in the air and turn them into
identical electrical signals. To replicate the original audio, you can
send the signals from the mic’s output to a mixer or audio
interface for recording or to studio monitors (or mixing
headphones), which turn them back into sound waves. But to get
something good out of your speaker, you need to make sure
you’re getting something good to begin with. So here’s our primer
on what to expect from different mics.
Each of the three primary types of microphones—dynamic
microphones, condenser microphones, and ribbon microphones—
has a different method for converting sound into electrical
signals.
All three have the same core construction, though. The capsule,
sometimes called a baffle, picks up the sound and converts it to
electrical energy. Inside the capsule is a diaphragm, a thin
membrane that vibrates when it comes in contact with sound
waves, initiating the conversion process.
The ideal types of microphones for a given situation directly
capture your intended audio source, such as your voice or a
musical instrument, without picking up any other nearby sound.
For example, a singer on stage needs a microphone that will
capture her voice while minimizing the pickup of the instruments
in her band. (The audio from other non-intended sources picked
up by a mic is referred to as “bleed” or “leakage.”)
Do the types of microphones record differently?
One of the most crucial specs of any microphone is its polar
pattern, or the direction(s) from which the microphone picks up
sound. Some microphones can only pick up sound directly in front
of them, while others can pick up sound from any direction.
The three most common polar patterns
are cardioid, omnidirectional, and bi-directional (aka “figure-8”).
Cardioid microphones are unidirectional. They pick up
significantly more sound from the front of the capsule than the
back and sides. The name cardioid stems from the heart-like
shape you see in a diagram of its polar pattern.
Most consumer and hobby-grade unidirectional types of
microphones feature at least one of three cardioid patterns.
Supercardioid mics are more focused on the front than cardioid
mics but have a small rear lobe that picks up sound from behind
at a much lower level.
Hypercardioid mics prevent more audio bleed from the sides but
pick up a little more noise directly behind the capsule. Engineers
often choose hypercardioid or supercardioid mics when even a
cardioid mic gets too much bleed from other sources.
Taking this to its logical extreme, a shotgun mic picks up only
sounds directly in front of it, even at a distance. Shotgun
mics feature a lobar pattern, a modified version of hypercardioid
or supercardioid that’s even more directional. You’ll often see
them mounted to high-end video cameras.
If you don’t care about ambient noise, you can use an
omnidirectional microphone, which picks up equally from all
directions. They’re great for situations where you want to record
sources on more than one side of the mic. If you’ve ever seen
multiple singers gathered around a single microphone, they were
likely using one of these.
An omni mic is best used in a recording studio setting, where
you can control ambient noise, or in a situation where you want to
record everything around you. Imagine recording an acoustic
guitar in a cathedral where you want the room's acoustics to be
part of the recording.
Lastly, bi-directional mics pick up equally from either side of the
capsule but reject the sound coming from the front. That makes
them useful in the studio for ensemble miking situations where
you want to record, say, a background singer on either side of the
mic but minimize the bleed from an instrument positioned in front
of it.
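All of these first-order patterns can be described by one small formula: pickup = A + B * cos(theta), where theta is the angle of the sound source off the mic's front axis and A + B = 1. As a minimal sketch (the A values below are common textbook coefficients, not the specs of any particular microphone):

import math

# First-order polar patterns: gain(theta) = A + (1 - A) * cos(theta).
# A = 1.0 is pure omni; A = 0.0 is pure figure-8; values between blend the two.
PATTERNS = {
    "omnidirectional": 1.0,
    "cardioid": 0.5,
    "supercardioid": 0.37,   # approximate textbook value
    "hypercardioid": 0.25,
    "bi-directional": 0.0,
}

def pattern_gain(pattern, theta_deg):
    """Relative pickup (1.0 = full) for sound arriving theta_deg off-axis."""
    a = PATTERNS[pattern]
    return a + (1.0 - a) * math.cos(math.radians(theta_deg))

for name in PATTERNS:
    front, side, rear = (pattern_gain(name, t) for t in (0, 90, 180))
    print(f"{name:16s} front={front:+.2f}  side={side:+.2f}  rear={rear:+.2f}")

Running this shows why the names fit: the cardioid's rear pickup is exactly zero, the hypercardioid has a small negative rear lobe (negative means that lobe is picked up with inverted polarity), and the figure-8 rejects the sides completely.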
Does what’s inside a microphone make a difference?
The various microphones—dynamic, condenser, and ribbon—
feature different technologies to convert sound waves to electrical
signals.
Dynamic microphones use electromagnetic induction to
convert sound waves to an electric signal. Inside the capsule is a
Mylar diaphragm with a conductive coil attached to it. When
sound waves vibrate the diaphragm, it moves the coil in a
magnetic field, creating an AC voltage. As a result, dynamics are
sometimes called moving-coil dynamic microphones.
They’re durable and versatile. Dynamic mics are less likely to
overload and distort than condenser mics when capturing high
SPL sources such as drums, guitar amps, horns, and vocals.
Their capsules tend to be less delicate than condenser mics,
making them well-suited to be handheld vocal mics for live
performance.
They also have lower sensitivity than condenser mics, requiring
higher sound pressure levels (SPL)—that is, louder sources—to
operate.
Condenser microphones
A condenser mic usually requires an external power source to
charge it. They typically pull this power, called "phantom power,"
from a mixer or audio interface.
Of the three types of microphone designs, condenser mics
generally offer the best high-frequency audio reproduction, which
makes them the most common choice for capturing the nuances
of voices. Their high-end response also allows them to reproduce
better transients, the peaks at the beginning of a sound wave.
Hand percussion, such as shaker and tambourine, and acoustic
guitar, also benefit from accurate transient reproduction.
Condenser mics come in two basic categories: large-diaphragm
and small-diaphragm. Large-diaphragm mics are usually defined
as having diaphragms that are 1 inch or larger. Typically, large-
diaphragm condensers have a more well-rounded frequency
response and work best for recording voices. Small-diaphragm
condensers have the best high-end response and are preferred
for recording instruments.
Many large-diaphragm condensers offer multiple polar patterns so
that you can switch between cardioid, omni, and bi-directional.
Some even let you customize the polar pattern to fine-tune its
directional focus.
Ribbon microphones are technically a form of dynamic
microphone but are generally treated as a separate design
because they work and sound very different than their traditional
counterparts. The ribbon design includes an extended rectangular
diaphragm made of thin aluminum with magnets at either end.
When sound waves hit it, it vibrates to create an electrical charge.
Most ribbon mics feature a bi-directional (figure-8) polar pattern.
A great ribbon mic offers the most natural sound reproduction. Its
frequency range most closely mimics human hearing, so audio
doesn’t come in as bright as on condensers or dynamics, but
vocals and instruments sound very clear and natural. Ribbon mics
are primarily used in recording studios, where you can get perfect
positioning and protect them, as they tend to be more delicate
than the other types.
What’s the proximity effect?
Another critical concept you should know about certain
microphones is the proximity effect, which is present in all mic
types except those with omnidirectional patterns. It manifests in
an increase in low-frequency response the closer the mic gets to
the source. If you’ve ever noticed how much deeper your voice
sounds when you put the mic right up to your mouth, what you’re
experiencing is the proximity effect.
The effect is most noticeable in sources with a lot of low-
frequency content, such as male voices. Radio personalities have
long used the proximity effect to make their voices sound bigger
and more authoritative.
Although you can use the proximity effect to help thicken or
deepen the sound of the source, you have to be careful not to add
too much low-frequency information. One of the reasons that
singers use pop screens in front of mics in the studio is because
the accentuated low frequencies from the proximity effect mean
that the mic will pick up more plosives (popped consonant sounds
like “P” and “B”).
The proximity effect can also be problematic when you place a
mic close to an acoustic guitar’s soundhole, pointed directly at it.
That’s where most of the bottom end of the guitar emanates from,
and the proximity effect exaggerates it more. Sometimes, if you
want that soundhole tone, you could use an omnidirectional mic
because it doesn’t exhibit the proximity effect.
How to pick from the types of microphones
When choosing from the many types of microphones, the most
important question to ask yourself is, "What do I need it for?"
If you’re in the market for the best podcast microphone and only
want to record your voice, a large-diaphragm condenser is
probably your best choice because it excels in vocal reproduction.
You won't need multiple polar patterns, just cardioid. A USB
microphone is a good option for your podcast recording if you
don't own an audio interface: you can plug such a mic straight
into a compatible port on your computer without any additional
hardware.
However, if you're looking for an all-purpose studio microphone
for recording—something like the $499 Shure SM7B, a great
dynamic microphone for recording vocals, or its direct, equally
recommendable competitor, the $299 Universal Audio SD-1—
you'll presumably be using an audio interface. For all-round
quality and versatility, a large-diaphragm condenser XLR
microphone with multiple patterns is another excellent choice.
But suppose you’re strictly looking for a mic for recording acoustic
instruments. In that case, you might want to consider a small-
diaphragm model that will likely give you a superior high-
frequency and transient response.
You can’t go wrong with a cardioid dynamic mic for live vocals. It’s
the near-universal choice of live sound engineers worldwide. If
you like to move about the stage while performing, you might
consider a wireless microphone, which will include a receiver in
addition to the mic.
What Is A Microphone?
Regardless of which type it is, every microphone is a transducer. A
transducer is a device which converts energy from one form
to another. A microphone converts acoustic energy in the
atmosphere to voltage in a cable.
All microphones work on the same basic principle. A membrane in
the microphone, called a diaphragm, moves sympathetically with
the movement of air particles around it. This mechanism is similar
to that of the tympanic membrane (eardrum) in the human ear.
The Five Types of Microphones
Although each microphone uses a diaphragm to capture
movements in the atmosphere, the method used to convert that
movement into electrical currents varies. The following sections
explore the five methods.
Dynamic Microphones
How They Work
Dynamic microphones are also called moving coil
microphones. They function on the following principle: as a
coil of wire moves in relation to a magnet, a voltage is
created on the wire.
In a dynamic microphone, the diaphragm is attached to a coil
of wire. The coil of wire surrounds a magnet.
The diaphragm is usually made of aluminum alloy or other low-
mass material, so that it can be moved by the low-mass particles
of the air.
As the diaphragm shifts forward and backward with the
movements in the atmosphere, the coil of wire also moves.
Because the coil surrounds a stationary magnet, its movement
through the magnetic field creates a voltage on the wire.
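The relationship at work here is Faraday's law of induction. As a sketch in LaTeX notation (a general physics formula, not the spec of any particular mic):

    V = -N \frac{d\Phi}{dt}

where V is the voltage induced on the coil, N is the number of turns of wire, and dPhi/dt is the rate at which the magnetic flux through the coil changes. Louder sound moves the diaphragm (and coil) faster, the flux changes faster, and a larger voltage appears on the wire.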
Speakers operate on this same principle, but in reverse. You can
see a detailed animation of how a speaker works by Animagraffs.
This is a website that creates amazing 3D models of various
technologies.
What They Do Best
Dynamic microphones are the most common type of microphone.
The following qualities are what make them so popular:
 Durable
 High-Mass
 Directional
 No Inherent Noise
Durable
Dynamic microphones are relatively simple in their
construction, which makes them more rugged than the more
delicate types. Because they are so compactly constructed, the
noise from handling a dynamic microphone is greatly reduced.
The Shure SM58, a classic handheld dynamic microphone, is
known for its virtual indestructibility. These mics are notorious for
their ability to be dropped, tossed, and accidentally hit with
drumsticks, while maintaining a consistent sound quality
throughout their lifespan.
The durability and rugged construction of moving coil
microphones makes them great for live sound applications.
High-Mass
Dynamic microphones are relatively heavy, or massive. This
makes them less sensitive than other microphone types.
When a microphone has low sensitivity, it means that it can
handle a louder input.
More sensitive microphones will sound great for quiet sources,
but will start to distort the signal at higher levels. Dynamic
microphone diaphragms are generally heavier than the
diaphragms in other types of microphones. Although this means
they require more gain, it also means they can accurately capture
very loud signals without distortion.
If you put a sensitive microphone on a very loud guitar amplifier,
snare drum, or horn, for example, the sound will completely
overwhelm the diaphragm and cause saturation. These high-
output sources require a microphone that can accommodate the
sounds they create.
Directional
Dynamic microphones are capable of omnidirectional and cardioid
polar patterns.
Most have a cardioid polar pattern, meaning that they pick up
sounds best from in front of the diaphragm and reject
sounds best from behind the diaphragm. Although other
types of microphones are also available with a cardioid polar
pattern, dynamic microphones are superior in their ability to
reject sounds from the sides and rear.
This has many practical applications.
Firstly, the high directionality of dynamic microphones can
help capture only a particular instrument or source, even
when other sources of sound are nearby.
Whether you are trying to capture a vocalist standing next to a
drum kit, one horn in a brass section, or a podcast guest in a
noisy room, the superior directionality of dynamic microphones
can be useful.
The directionality of moving coil microphones also helps in
sound reinforcement situations.
Any time you send the signal from a microphone to speakers in
the same room, there is the danger that the sound from the
speakers will enter the microphone and create a feedback loop.
Dynamic microphones are able to provide more gain before
feeding back through the speakers. This is another reason
they are ideal for live sound.
No Inherent Noise
As you will see in the following sections, some other types of
microphones contain more complicated circuitry than the dynamic
type. The circuitry of those microphones may add some benefits,
but they come at a cost.
Dynamic microphones use simple, passive circuits to
convert sound to electricity. This simplicity offers the benefit
of no inherent noise, meaning you can use more gain without
starting to hear hiss or hum.
Condenser Microphones
How They Work
Condenser microphones are also called capacitor
microphones. They function on the following principle: If two
metal plates are in close proximity, the closer they are, the
higher the capacitance.
Capacitance is the ability of a system to hold an electrical charge.
In a condenser microphone, an electrically conductive diaphragm,
usually made of gold-sputtered mylar, is suspended in close
proximity to a solid metal plate. When sound waves interact with
the diaphragm, it moves back and forth relative to the solid metal
plate. This change in distance from the backplate to the
diaphragm creates a change in capacitance, and an electrical
signal is created.
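The physics here is the parallel-plate capacitor equation; as a sketch in LaTeX notation:

    C = \varepsilon \frac{A}{d}, \qquad V = \frac{Q}{C}

where A is the plate area, d is the diaphragm-to-backplate spacing, epsilon is the permittivity of the gap, and Q is the roughly constant charge placed on the capsule by its polarizing voltage. When a sound wave pushes the diaphragm closer, d shrinks, C rises, and the output voltage V = Q / C falls; as d oscillates, V tracks the sound wave.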
An impedance conversion circuit must be placed after the
output of the capacitor to make the signal usable in an audio
system. This circuitry requires +48V-DC, known as phantom
power.
In some applications, such as cell phones and computers, electret
condenser microphones are used which utilize permanently
charged material and do not require phantom power.
What They Do Best
Condenser microphones are also very common in professional
audio. They are useful, thanks to the following qualities:
 Low-Mass
 Variable Polar Pattern
Low-Mass
There are a few practical advantages of the low-mass diaphragm
in a condenser microphone. Firstly, the low mass of these
diaphragms makes them more capable of capturing transient
sound waves. A transient is the short, high-amplitude burst
at the beginning of a sound wave. If a snare drum is struck, the
microphone must move very quickly to capture the sound
accurately. The heavier the diaphragm, the longer it takes to
respond to the sound wave. Condenser microphones are
excellent in capturing these quick changes in sound pressure.
Another advantage to the low-mass diaphragms found in
condenser microphones is their frequency
response. Compared to other microphone types, condensers
have the widest frequency response. This means that condenser
microphones are capable of capturing variations in the air that
cycle very quickly. This property allows condenser microphones
to capture more detail from the sound source.
In contrast to high-mass dynamic microphone diaphragms,
condenser diaphragms have high sensitivity. This has the
positive effect of increased clarity and ability to capture low
level sounds. However, this low mass also makes them
susceptible to saturation in high sound pressure level
applications.
Variable Polar Pattern
Most condenser microphones have a fixed polar pattern of
cardioid or omnidirectional.
However, because the depth of a condenser microphone
circuit is very small in comparison to moving coil circuits,
some condenser microphones offer the ability to vary the
polar pattern with a switch.
For example, the AKG C414 has the ability to operate in
omnidirectional, bidirectional, cardioid, wide cardioid, and
hypercardioid.
This is accomplished through the use of two diaphragms in
close proximity. The signal from each is mixed together, and
the phase interactions between the two signals creates
cancellation of sounds which enter from certain angles.
This property makes condenser microphones capable of being
very versatile. Not all condenser microphones are capable of this.
The microphones that do offer multipattern functionality, however,
can be an engineer's best friend. These microphones generally
come at a financial cost, but their versatility makes them very
valuable in a variety of situations.
Ribbon Microphones
How They Work
Ribbon microphones are also called velocity microphones,
and are technically a variation of a dynamic microphone. They
function on the following principle: as an electrically
conductive ribbon moves within the field of a magnet, a
voltage is created on the ribbon.
In a ribbon microphone, a corrugated ribbon is suspended in the
field of a permanent magnet. The ribbon is typically made of low-
mass, aluminum alloy. When sound interacts with the suspended
ribbon, the ribbon moves in relation to the magnet. This creates a
voltage on the ribbon, which is connected to the output of the
microphone.
What They Do Best
Ribbon microphones are a bit less common in live situations, but
are still very common in studio settings. The following
characteristics account for this:
 Low-Mass
 Low-Sensitivity
 Directional
 Non-Linear Response
 Low Inherent Noise
Low-Mass
The low mass of a ribbon diaphragm makes for an excellent
frequency response. As small changes in the velocity of air
particles occur, the ribbon can follow. This creates an electrical
signal that very closely represents the original sound. The Royer
R-121 is known for its transparent, realistic sound and remarkably
flat frequency response.
Although the low mass of the diaphragm benefits the sound
of a microphone, the durability of a ribbon microphone
suffers. The ribbon can be anywhere between 0.6 and 4 microns
thick. Compare this to the 100-micron diameter of a human hair,
and it becomes clear just how small a ribbon diaphragm is.
Sudden gusts of air, and even the sustained pull of gravity in a
certain direction can damage these microphones. This is
especially true of vintage models.
Low-Sensitivity
The low mass of condenser microphones makes them very
sensitive. However, ribbon microphones have a low mass
while maintaining their ability to handle high SPL (Sound
Pressure Level). This makes them a great choice for capturing
detailed, transient sounds without fear of diaphragm saturation.
Directional
Ribbon microphones are inherently bi-directional. Sounds
from the front and back of the ribbon (perpendicular to the
ribbon) are picked up evenly. Meanwhile, sounds from the side
(parallel to the ribbon) will place pressure on both sides of
the ribbon evenly and will result in no movement of the
ribbon at all.
This feature can be useful in many situations. In broadcast,
especially in the early days, ribbon microphones could capture
two people speaking while rejecting sound from an audience or
equipment rack nearby. The directionality can also be useful in
capturing the sound of a choir in a reverberant space, allowing
both the signal and the reverb to be recorded.
Non-Linear Response
The ribbon diaphragm does not respond linearly to sound
pressure levels. This means the output voltage does not rise in
exact proportion to the sound pressure. This quality mimics the
human perception of loudness. The result is a more natural
sound that does less to embellish certain characteristics of a
sound source. This characteristic makes ribbon microphones
excellent for stereo recordings of an instrument or ensemble.
Low Inherent Noise
Ribbon microphones often utilize passive circuitry and are
thus less susceptible to electronic noise than condenser
microphones. The voltage on the ribbon simply represents the
movement of the ribbon itself, isolated from any electronic noise.
This allows ribbon microphones to be very quiet while maintaining
their ability to react quickly to small movements in the air.
Carbon Microphones
How They Work
Carbon microphones operate on the following concept:
When carbon granules are compressed, their resistance
decreases.
Carbon microphones are not commonly used in the modern
world, but were used in telephony and broadcast in the early days
of the technology up until the late 70s. A battery is required to
create an electric current to flow through the carbon granules. As
sound interacts with the carbon, the granules are compressed.
This changes the resistance of the carbon, in turn increasing and
decreasing the current with the movements of the air. Carbon
microphones suffer from significant noise and limited frequency
response.
Crystal Microphones
How They Work
Crystal microphones are also called piezoelectric
microphones. They function on the following principle: When
certain crystals are placed under mechanical force, a voltage
is produced.
This is called the piezoelectric effect. As sound places pressure on
a diaphragm or on the crystal directly, the crystal is flexed. This flexion
creates an electrical charge on the crystal, which represents the
vibrations in the air. Crystal microphones are also not widely used
today. They do not offer wide frequency response and operate at
a high impedance not suitable for professional applications.
Audio Mixers for recording
A mixing console or mixing desk is an electronic device
for mixing audio signals, used in sound recording and
reproduction and sound reinforcement systems. Inputs to the
console include microphones, signals from electric or electronic
instruments, or recorded sounds. Mixers may control analog or
digital signals.
An audio mixer is a device with the primary function to accept,
combine, process and monitor audio. Mixers are primarily used
in four types of environments: live (at a concert), in a recording
studio, for broadcast audio, and for film/television. An audio mixer
can come in either analog or digital form.
There are two builds of mixers:

In-Line
An in-line mixer means there are two paths per channel. For
example, in a recording environment, an in-line mixer can use
the same channel to send and receive sound to and from a
digital audio workstation (DAW).
Split Monitor
A split monitor console has one path per channel. Each
channel can be used to either send or receive sound to or from a
DAW. Both of these builds have two areas for us to explore: the
channels and the master section.
Before we begin, let’s discuss some terminology and vocabulary.
An audio mixer may also be referred to as a “console,” “desk,” or
"board." All three of these terms are synonyms for "mixer."
Additionally, we must define “signal.” Signal is the generalized
term for any audio passing through a mixer. This could be a
vocal, drum, bass, synth, guitar or another instrument. All of it is
referred to as signal.
Regardless of the application, build or form of the mixer, most
are structured in a similar fashion. Let's examine the parts,
features, and prices of audio mixers.
Parts of a Channel

Input Section
The input area of the channel strip may accept any or all of the
following levels of signal: mic, instrument, -10 line level or +4 line
level. Depending on which level of signal is desired to be
accepted, specific knobs and/or switches must be engaged and
used. For example, if a microphone is wired into a channel, the
channel must be configured to accept mic level signal. Then, the
mic pre knob must be raised in order to add gain to the signal.
Wiring a guitar, bass or another instrument may use a separate
input jack (probably for an "unbalanced" ¼" cable). In this case,
there may be a switch or button labeled “DI” in the input section.
If so, pressing this button will tell the mixer to accept the
instrument level signal (rather than mic level or line level).
The mixer will also be able to accept +4 line level, -10 line level
or both. This normally takes the form of deselecting all switches
and buttons but may vary between consoles. Some mixers have
a knob referred to as the “line trim.” This serves the same
purpose for line level signal as the mic pre does for mic level
signal: it adds or subtracts gain at line level.
EQ

Most mixers have an equalization (EQ) area on the channel strip.
The amount of flexibility and precision of frequencies that can be
adjusted in the EQ area is typically related to the overall
sophistication of the board. For example, most budget and
moderate-level mixers have fewer frequencies that can be
targeted and adjusted with broad strokes, while high-end desks
have EQ areas that can be used as a scalpel.
Dynamics

High-end mixers commonly have a "Dynamics" area, either on
each channel or in the master section. Common dynamic effects
included in this area are compressors and gates. When a switch
is engaged to activate the dynamics area, the compressor will
make the voice or instrument sound more even (loud parts
become softer, soft parts become louder). The gate will open to
allow any signal to pass through the channel, then close to keep
out background noise and bleed.
Fader

The fader is the device that raises or lowers the amount of
audible signal from the channel. A fader is actually a resistor: as
it is lowered, it increases resistance. This is why the numbers
start high at the bottom of the fader bank on each channel and
decrease to the zero point (usually located about two-thirds of
the way to the top).
These numbers indicate a general idea of loudness (more
advanced users will recognize these numbers as the logarithmic
decibel scale). The most important number on the fader bank is
zero. The zero point on the fader is referred to as “unity,”
meaning there is no resistance applied to the channel’s signal.
Any fader position above zero is amplifying the signal as
opposed to varying the amount of resistance as it is moved up or
down below zero.
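Those fader markings follow the standard decibel formula; as a sketch in LaTeX notation:

    \text{gain (dB)} = 20 \log_{10}\left(\frac{V_{out}}{V_{in}}\right)

At unity (0 dB) the ratio is exactly 1, so the signal passes unchanged; at -6 dB the signal is at roughly half its original voltage, and at +6 dB (a typical fader maximum) roughly double.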
Group Faders

Group faders on an audio mixer are used to control multiple
channel faders at once. A common use for a group fader is to
raise or lower all of the faders that contain drum signals together
at once — the collective mix of the drum faders can be raised or
lowered using a group fader (also referred to as a subgroup). If
included in a console’s construction, there may be anywhere
from two to eight group faders.
In many cases, these group faders are actually voltage-
controlled amplifiers, meaning there is no audio passing through
them. These are commonly referred to as VCA Group Faders.
Auxes

Mixers have a dedicated area for auxiliaries (auxes) used to
send a copy of the channel’s signal to another destination. In the
studio environment, auxes are used for creating a headphone
mix or adding a time-based effect (such as reverb or delay). On
a live console, auxes are used for time-based effects, but also
may be used to send sounds to in-ear monitors or a monitor
wedge on stage.
These copies can be taken before or after the fader, referred to
as “pre-fader” or “post-fader” auxes. A pre-fader aux sends the
same amount of level to the new destination regardless of the
fader’s position. A post-fader aux maintains the “wet-to-dry ratio.”
This means each post-fader aux has its own level control and the
effected signal can be raised and lowered with the fader.
With this in mind, application of these auxes becomes easy. Pre-
fader auxes are used for headphone mixes so adjustments to the
fader are not heard in the performer’s headphone mix. Post-fader
auxes are used for time-based effects so the amount of the
effect (reverb, etc.) stays consistent with respect to the fader
level.
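The pre/post distinction is easy to see as plain arithmetic. A minimal sketch (the function name and the 0.0-1.0 gain scaling are illustrative, not any console's actual internals):

def aux_sends(signal, fader_gain, aux_level):
    """Return (pre_fader_send, post_fader_send) for one channel."""
    pre_fader = signal * aux_level                # ignores the fader entirely
    post_fader = signal * fader_gain * aux_level  # scales with the fader
    return pre_fader, post_fader

# Pull the fader from unity (1.0) down to 0.5: the headphone (pre) send
# is unchanged, while the effect (post) send drops with it, preserving
# the wet-to-dry ratio.
print(aux_sends(signal=1.0, fader_gain=1.0, aux_level=0.8))  # (0.8, 0.8)
print(aux_sends(signal=1.0, fader_gain=0.5, aux_level=0.8))  # (0.8, 0.4)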
Bus Assignment

A bus is no more than a path on which signal can travel. This is
relatable to any bus driving down the street — and every bus
needs a destination. (Here in Los Angeles, the buses stop before
they drive into the ocean…most of the time.) The bus
assignment area on a mixer is the same concept: we are sending
signal down a path that leads to a destination. Common
destinations are external pieces of gear, audio subgroups or an
audio interface.
The most common bus assignment is the stereo bus. The stereo
bus is a two-channel mix (left and right) of all the faders on the
console. It can also be thought of as the sum of all the faders.
Most consoles require a button or switch to be engaged in order
to send the channel to the stereo bus (and therefore be audible
in the mix).
Track Busses & The Routing Matrix

Track busses are a collection of paths for our signal to travel. If
the mixer is equipped with track busses, there are commonly
anywhere from four to sixteen located on the top of each
channel. The busses can be routed to an audio interface for
recording, other channels for summing or to a piece of outboard
gear (such as a compressor).
More advanced consoles contain anywhere from twenty-four to
forty-eight busses and additional routing options. If so, this area
is referred to as “the routing matrix.” A routing matrix possesses
the same functionality as the track busses but expanded routing
and destination options are available.
Pan

The purpose of the pan pot (short for potentiometer) is to pan a
channel’s signal left or right across the stereo bus. An in-line
console will have two pan pots, each assigned to a designated
path. A split monitor desk will have one pan pot, normally located
above the fader.
Panning information can also be translated to track busses or the
routing matrix: if more than one bus is selected on the channel
and the fader has been assigned to those busses, the stereo
image of the signal will be retained.
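Under the hood, a pan pot simply splits one signal into two gains. Most desks use some variant of an equal-power pan law; this sketch uses the common sine/cosine version (an assumption for illustration, not a statement about any specific console):

import math

def equal_power_pan(sample, pan):
    """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (pan + 1.0) * math.pi / 4.0  # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(angle), sample * math.sin(angle)

print(equal_power_pan(1.0, -1.0))  # (1.0, 0.0)           hard left
print(equal_power_pan(1.0, 0.0))   # (0.707..., 0.707...) center, -3 dB per side
print(equal_power_pan(1.0, 1.0))   # (~0.0, 1.0)          hard right

The -3 dB dip at center keeps the perceived loudness roughly constant as the signal sweeps across the stereo image.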
The Master Section

The master section provides areas for global adjustments to the
channels or modes of the console. It is typically located in the
middle of the desk. Some manufacturers refer to the master
section as the “Centre Section.”
Common adjustments made in the master section are toggling
which level of signal the channels are accepting, master levels
for auxes and busses, toggling the control room source (what is
playing through the speakers), changing the control room level
(volume of playback) and speaker selection. There also may be
a foldback area (used for configuring headphone mixes) and
stereo effect returns (a destination for the return of outboard
equipment).
Patch Bay

A patch bay is a device located next to an audio mixer that has a
series of jacks meant for moving signal from one place to
another on the console. The jacks are organized by row and
offer increased flexibility.
Additionally, they serve an organizational purpose. Cables from
the mixer and outboard equipment can be wired directly to the
patch bay so signal can be routed easily without having to
connect cables directly to a device. Patch bays are most
common in the studio environment but can be used in other
situations as well.
Phantom Power
A phantom power button is present on each channel of most
every mixer. Phantom power is required for condenser
microphones: it adds 48 volts of direct current via the XLR cable
used to wire the microphone to the mixer. The phantom power
button may be labeled “48,” “+48,” “+48v” or “phantom power.”
Polarity

A button on most mid-level and high-end mixers is a polarity flip,
commonly using the Ø symbol. Flipping the polarity on a channel
changes the phase relationship. Phase is defined as a time
relationship between two waveforms.
An inverse phase relationship can cause frequency builds and
cancellations, which will negatively affect the sound. Anytime
there is more than one mic on the same source (e.g. top and
bottom snare), we must flip the polarity on one of the channels in
order to check the phase relationship.
Flipping the polarity on one of the channels allows us to hear the
sum frequency response of both channels and verify there are no
phase issues detracting from the sound. Make sure to use
whichever polarity position produces a full, thick sounding result.
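You can see why this check matters by summing two copies of the same wave; a tiny sketch (identical copies are the worst case - two real mics on one source only partially cancel):

import math

tone = [math.sin(2 * math.pi * n / 100) for n in range(100)]

in_phase = [a + b for a, b in zip(tone, tone)]  # polarities match
flipped = [a - b for a, b in zip(tone, tone)]   # one polarity inverted

print(max(in_phase))  # 2.0: the signals reinforce each other
print(max(flipped))   # 0.0: complete cancellation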
PAD

Many mixers have a button labeled only with a number rating,
such as "-10" or "-10dB." This is the PAD for the channel. PAD is
an acronym for "Passive Attenuation Device"; it reduces the level
of the incoming signal before it hits the preamp, allowing louder
signals to pass through the channel without distortion. The
number rating (e.g. -20dB) is the strength of the PAD, measured
in decibels. The higher the decibel rating, the stronger the PAD.
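The dB rating converts to a simple voltage ratio; as a worked example in LaTeX notation:

    \text{ratio} = 10^{\text{dB}/20}, \qquad 10^{-20/20} = 0.1

so a -20dB pad passes one tenth of the incoming voltage, and a -10dB pad passes about a third of it (10^{-10/20} is roughly 0.316).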
Filter

High-pass filters are available as a button on the channels of
most mixers. The high pass filter will dramatically soften the low-
end frequency response of the channel and is useful for
decreasing or removing rumble from a signal. Low-end mixers
may mark the button with only a small filter-slope symbol.
Moderate and high-end consoles typically have a numerical
value next to the symbol. The numerical value represents the
frequency at which the filter begins.
Meters
Various styles of meters exist on mixers and for audio
production in general. On a mixer, the collective area of the
meters across all of the channels is referred to as the meter
bridge. Three common meters that may be found on a mixer are
VU, Peak, and RMS.
VU meters display the level of perceived loudness on a channel
or the stereo bus, with signals far above the zero point likely to
distort. Peak meters, which are most familiar in modern times,
indicate the loudest part of a signal at any instant. RMS meters
display the average loudness of a channel or the stereo bus,
indicating the dynamic range of the signal when compared to the
zero point.
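The peak and RMS readings are just two different statistics computed over the same samples. A minimal sketch using normalized floating-point samples (real meters also add ballistics and time windows, which this ignores):

import math

def peak_db(samples):
    """Loudest single sample, in dB relative to full scale (dBFS)."""
    return 20 * math.log10(max(abs(s) for s in samples))

def rms_db(samples):
    """Root-mean-square (average) level in dBFS."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(rms)

# A full-scale sine wave peaks at 0 dBFS but averages about -3 dBFS:
sine = [math.sin(2 * math.pi * n / 100) for n in range(100)]
print(round(peak_db(sine), 1), round(rms_db(sine), 1))  # 0.0 -3.0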
The right mixer in the right hands can become powerful. Make
sure to practice and purchase wisely.
Digital vs. Analog

Audio mixers are manufactured in either analog or digital form.
Each of these console types has its own set of advantages and
disadvantages, which are directly related to one another.
The biggest advantage digital consoles hold over an analog
mixer is instant “recall.” This means a mix (or setup) can be
reloaded to the exact parameters from when it was last saved.
Every knob, switch, button, and fader will snap to its saved
position. Analog consoles have to be recalled manually, meaning
each knob, button, switch, and fader has to be documented and
returned to its original position by hand. This can be a time-
consuming and tedious process.
The main advantage an analog desk holds over a digital mixer is
“summing.” Summing is the process of combining signals from all
the channels using analog circuitry: iron, wires, faders and
electrical components. Analog summing adds a desirable
dimension to signal and is very familiar to listeners.
Digital mixers do not sum signal in the same way: they are
merely processing the signal using digital, binary code (1’s and
0’s). The sonic texture of analog summing is so popular that
devices called “summing mixers” have become very popular in
home and small recording studios.
A summing mixer can be thought of as an analog console minus
the channels and features. They most commonly take the form of
a unit that has sixteen analog inputs. In some summing mixers,
each input has its own volume and pan pots. Signal passes
through the analog circuitry and is summed to a physical stereo
bus in the same fashion as an analog console, adding a
desirable quality to the vocals and/or instruments.
Audio Control and Adjustment in Video Camera:
Audio Level:
Audio level is usually measured in decibels. 0 decibels (dB) is
the maximum level, and a quieter level is measured as a
negative number such as -15dB. Low-level background noise is
referred to as the noise floor. Below a certain level, the audio
signal is quieter than the noise floor, and is effectively silence.
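The mapping between those dB numbers and linear amplitude is a one-liner; a quick sketch:

def db_to_amplitude(db):
    """Convert a dBFS level to linear amplitude (1.0 = full scale, 0 dB)."""
    return 10 ** (db / 20)

for level in (0, -6, -15, -60):
    print(f"{level:4d} dB -> amplitude {db_to_amplitude(level):.4f}")
# 0 dB is full scale (1.0000); -15 dB is roughly 0.1778 of full scale;
# a very low level like -60 dB (0.0010) sits down near the noise floor.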
Sound is made up of three basic elements:
Frequency: how fast the vibrations are occurring.
Intensity: how loud the sound is.
Timbre: the sound's quality.
Audio levels are incredibly important when creating content - poor
audio levels will ruin your footage no matter how incredible it
is. Audio can sometimes be overlooked as you spend most of
your time focusing on an incredible shot or colour grading. So,
sometimes you just export and settle for average audio.
How to Set Audio Levels for Video
Bad sound can easily ruin good footage. Use these tips when it
comes time to set audio levels for video and film projects.
As a filmmaker and videographer you probably spend a lot of time
trying to get the most beautiful shot possible on set. Then
when you sit down to edit the video, more time is spent color
correcting and grading. Then it’s time to move on to exporting,
where you pick the perfect codec that will output good
colors while minimizing compression.
However, when it comes to audio there’s usually not much more
effort put into it than simply, “sounds good”.
Audio is probably the most overlooked and under-appreciated
aspect of the filmmaking process. Audiences can forgive subpar
cinematography, but subpar audio will make even the most loyal
audiences disengaged. There seems to be a lot of misinformation
out there about the correct way to level audio in videos, so let’s
take a look at how to correctly set audio levels for video in the
modern industry.
Correct Audio Levels
There’s a lot of debate online about the right way to level audio,
but there’s one thing that everyone can agree on: you should
never go above 0db. Even though your NLE may allow you to
raise your audio levels all the way up to 6db, it’s advised by
virtually everyone that you should keep your audio below 0db.
This is because your audio will begin to clip after 0db, resulting in
horrible distortion.
Think about it as a ceiling. As soon as your audio peaks and hits
the ceiling, it will begin to get distorted. This is why explosions
in Hollywood movies sound awesome and explosions in bad
YouTube videos sound like muffled junk. Most major NLEs and
audio editing software will have red indicators pop up anytime
your audio goes above 0. If you’re getting this warning, then your
audio is too loud. For safety, no peaks should be above -6db.
Most video editors agree that the overall audio level of your audio
mix (all of your audio combined) should be normalized between
-10db and -20db. I personally level my videos around -12db with
occasional peaks to -8db. The trick here is to stay away from 0db
as best you can. Experts differ on the exact numbers — because
there are no exact numbers. Editing audio is a lot like editing
video. At the end of the day, your ears should be your guides if
you’ve adhered to the technical rules.
Some people say that if you are creating a video for the web, then
you should get closer to 0db… but it all depends on the video you
are producing. If you’re creating a film, you might want to have
more range between your average mix level and your peaks. If
you’re doing a simple web commercial, it might make more sense
to have less range and a higher overall average level. You’ll figure
out what works for you over time.
Recommended Levels
Here’s a few of my recommended level suggestions.
Overall Mix Level: -10db to -14db
Dialogue: -12db to -15db
Music: -18db to -22db
Sound Effects: -10db to -20db with occasional spikes up to -8db.
These recommendations are a little on the quiet side when
compared to some industry experts, but I find it much better for an
audience to turn up the volume than run the risk of peaking or
distortion.
Before you finalize your film or video, take off your fancy studio
headphones and go play your video on the platform in which
your audience will be watching it. For example, if you’re
working on a TV commercial, try watching your video on a TV to
see how it sounds. It's not uncommon for music producers to
listen to their tracks in a car before they sign off on distribution.
The same should be true for you. Whether it’s on a laptop,
iPhone, or TV, listen to your video on the medium in which your
audience will be watching and you’ll quickly find audio quirks.
It should also be noted that film distributors and TV broadcasters
often have their own standards for audio levels, so you should
absolutely consult with them before setting your audio levels. In
NTSC countries, for example, peak audio levels must not exceed
-10db with occasional -8db spikes, whereas PAL countries must
not exceed -9db.
Do I Need to Use Bars and Tone?
You’ve probably seen color bars on a local TV station or at the
beginning of an old tape recording. Bars and tone are used to
calibrate a system to optimally playback your video or film. Bars
are for color, so we’ll cover those in another article. Tone,
however, exclusively deals with audio and it’s really simple to
understand. Tone is just a reference engineers use to correctly
output your video at the appropriate level. And it sounds like one
long beep.
Some people say that tone should be set to the average loudness
of your video, but this is only an opinion. In fact, there is no set-in-
stone standard for the loudness at which tone should be set; it all
depends on the broadcaster's preferences. For example, FCPX's
tone default is -12db and Premiere Pro's is -20db. The trick is to
simply keep your tone decibels between -6db and -24db.
Truthfully, in most circumstances, you probably won't even
need to put bars and tone at the start of your video.
Good Audio is More Than Levels
While levels do play a big part when it comes to creating
professional audio, there’s a lot more that needs to be considered
and adjusted to get the best audio possible. Effects like
compressors, limiters, noise reducers, and (most importantly)
EQ will all help contribute to a well balanced and professional
sounding video.
Audio Channel:-

An audio channel is defined as the representation of sound
coming from or going to a single point. For example, a single
microphone can be used to produce one channel of audio, while a
single audio speaker can also accept one channel of audio. To
this point, a digital audio file can have numerous channels of data.
What Is Two-Channel Audio?
Sound is separated into channels, and with two channels
instead of one, you'll hear more depth to your music. Also
called stereo (as opposed to mono), sound is separated into left
and right speakers. You'll use two speakers positioned a distance
apart, facing the listener.
What is an Audio Channel?
A channel is a representation of sound coming from or going to a
single point. A single microphone can produce one channel of
audio, and a single speaker can accept one channel of audio, for
example.
A digital audio file can contain multiple channels of data. Music
that is mixed for headphone listening is saved as a file with two
channels - one sent to the left ear, one sent to the right, while
surround-sound movie audio is often mixed for 6 channels.
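Splitting a stereo file into its channels is mostly a matter of de-interleaving the samples. A minimal sketch using Python's standard wave module plus numpy, assuming a 16-bit stereo WAV (the filenames are placeholders):

import wave
import numpy as np

with wave.open("stereo_in.wav", "rb") as src:
    params = src.getparams()
    assert params.nchannels == 2 and params.sampwidth == 2  # 16-bit stereo
    frames = np.frombuffer(src.readframes(params.nframes), dtype=np.int16)

# Samples are interleaved L, R, L, R, ... so reshape to (nframes, 2).
stereo = frames.reshape(-1, 2)

for index, name in enumerate(("left.wav", "right.wav")):
    with wave.open(name, "wb") as dst:
        dst.setnchannels(1)               # one channel = mono
        dst.setsampwidth(2)               # keep 16-bit samples
        dst.setframerate(params.framerate)
        dst.writeframes(stereo[:, index].tobytes())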
What does this have to do with my recorders?
Many of our recorders have the ability to record on two separate
channels, including all SM2s, all SM3s, and the SM4 (non-
ultrasonic). These channels are marked as channel 0, or the left
channel, and channel 1, or right. When you configure these units
to record in stereo, the recordings they save will contain two
channels. If you listen to these files in a conventional audio
player, you will hear both channels simultaneously from your left
and right speakers. If you open the files in Kaleidoscope, you can
switch between viewing and hearing the left and right channels.
Using Kaleidoscope, you can also split these stereo files into
two mono, single-channel files.
Because the Song Meters listed above can only record on two
channels maximum, plugging in an external microphone will
override one of the internal microphones on an SM3 or SM4. If
you plug one SMM-A2 into channel 0 of an SM4, the left channel
will be recorded from that SMM-A2, and the right channel will be
recorded from the right-hand built-in microphone if the recorder is
configured for stereo recording.
The following recorders only have one recording channel, and will
always produce mono files: SM4Bat FS, SM4Bat ZC, Echo Meter
Touch, Echo Meter EM3, SMZC.
Image: an 8-channel audio mixer used for audio recording.
In-Camera Editing:-
In-camera editing is a technique where, instead of editing the
shots in a film into sequence after shooting,
the director or cinematographer instead shoots the sequences in
strict order. The resulting "edit" is therefore already complete
when the film is developed.
The process takes a lot of planning so that the shots are filmed in
the precise order they will be presented. However, some of this
time can be reclaimed, as there is no editing, cutting out or
reordering scenes later on. When the last scene is filmed by the
director or cinematographer, the production is completely finished.
A benefit of the technique, largely now irrelevant due to the rise
of digital video, is a reduction in the cost of the production. When
the cost of film was a significant fraction of the budget, filmmakers
used this technique to optimize film usage.
Because of its apparent simplicity, in-camera editing is also
popular with new students who may lack experience with editing,
or who want to skip the editing step. It can also be a very
educational process because of time and organizational skills that
are required. The discipline required to plan out each shot is a
useful pedagogical technique. Many introductory video production
courses cover the topic of in-camera editing for this very purpose.
The technique may also be used to limit directing and editing
interference in a production (often on the part of producers or
financiers) because the film exists only as shot, with no options
for editing. Any subsequent editing would require costly reshoots
and pick-ups.
Finally, if the filmmaker does not have access to film
editing equipment (notably, a non-linear editing system), then in-
camera editing may be the only available option.
File Formats (General):-
 MP4.
 MP3.
 MOV.
 WMV.
 AVI.
 AVCHD.
 FLV, F4V and SWF.
 MKV.
 WEBM or HTML5.
What is HTML?
 HTML stands for Hyper Text Markup Language
 HTML is the standard markup language for creating Web
pages
 HTML describes the structure of a Web page
 HTML consists of a series of elements
 HTML elements tell the browser how to display the content
 HTML elements label pieces of content such as "this is a
heading", "this is a paragraph", "this is a link", etc.
 Hypertext Markup Language (HTML) is the primary language
for developing web pages.
 HTML5 is a newer version of HTML that adds new
functionality and integrates the markup language with
Internet technologies. Earlier HTML did not support video
and audio, but HTML5 supports both.
Different Types of Files with Full Forms
 JPEG (Joint Photographic Experts Group)
 PNG (Portable Network Graphics)
 GIF (Graphics Interchange Format)
 PDF (Portable Document Format)
 SVG (Scalable Vector Graphics)
 MP4 (Moving Picture Experts Group)
 MP3 (MPEG Audio Layer 3): a format for audio coding and
storage, and a type of container for digital multimedia.
Video file format and codec basics.

Because video files can be large, programmes called codecs
were developed to make them easier to store and share. Codecs
encode data to compress it for storing and sharing. Then they
decode that data to decompress it for viewing and editing. The
most common codec for video compression is H.264 or AVC.
Video file formats or file extensions are the containers or
wrappers for these codecs. As with lossy audio file formats, most
video formats lose data in compression. Which format you choose
depends on the balance you want to strike between quality and
ease of use.
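In practice, most of this encoding and decoding is done by a tool such as FFmpeg. A hedged sketch of a typical H.264 re-encode, driven from Python (assumes the ffmpeg binary is installed and on your PATH; the filenames are placeholders):

import subprocess

# Re-encode a source clip into an MP4 container with H.264 (AVC) video
# and AAC audio, a safe default for web upload.
subprocess.run(
    [
        "ffmpeg",
        "-i", "input.mov",   # source file
        "-c:v", "libx264",   # H.264/AVC video codec
        "-crf", "23",        # quality: lower = better but larger (18-28 typical)
        "-c:a", "aac",       # AAC audio codec
        "output.mp4",
    ],
    check=True,              # raise if ffmpeg exits with an error
)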
Understand the top video file extensions.
These are the most common digital video formats and their most
frequent uses.
MP4
MP4 (MPEG-4 Part 14) is the most common type of video file
format. Apple’s preferred format, MP4 can play on most other
devices as well. It uses the MPEG-4 encoding algorithm to store
video and audio files and text, but it offers lower definition than
some others. MP4 works well for videos posted on YouTube,
Facebook, Twitter and Instagram.
MOV
MOV (QuickTime Movie) stores high-quality video, audio and
effects, but these files tend to be quite large. Developed for
QuickTime Player by Apple, MOV files use MPEG-4 encoding to
play in QuickTime for Windows. MOV is supported by Facebook
and YouTube and it works well for TV viewing.
WMV
WMV (Windows Media Video) files offer good video quality and
large file size like MOV. Microsoft developed WMV for Windows
Media Player. YouTube supports WMV and Apple users can view
these videos, but they must download Windows Media Player for
Apple. Keep in mind you can’t select your own aspect ratio in
WMV.
AVI
AVI (Audio Video Interleave) works with nearly every web browser
on Windows, Mac and Linux machines. Developed by Microsoft,
AVI offers the highest quality but also large file sizes. It is
supported by YouTube and works well for TV viewing.
AVCHD
Advanced Video Coding High Definition is specifically for high-
definition video. Built for Panasonic and Sony digital camcorders,
these files compress for easy storage without losing definition.

FLV, F4V and SWF
Flash video formats FLV, F4V and SWF (Shockwave Flash) are
designed for Flash Player, but they’re commonly used to stream
video on YouTube. Flash is not supported by iOS devices.
MKV
Developed in Russia, Matroska Multimedia Container format is
free and open source. It supports nearly every codec, but it is not
itself supported by many programmes. MKV is a smart choice if
you expect your video to be viewed on a TV or computer using an
open-source media player like VLC or Miro.
WEBM or HTML5
These formats are best for videos embedded on your personal or
business website. They are small files, so they load quickly and
stream easily.
MPEG-2
If you want to burn your video to a DVD, MPEG-2 with an H.262
codec is the way to go.
Editing and exporting video files.
Whether you shoot your footage with a DSLR camera, Cinema
DNG or other HD video camera, you can work with your native
camera format in Adobe Premiere Pro. With lightweight workflows
and seamless integration with other Adobe apps, Premiere Pro
allows you to create the video you want, even on mobile
workstations. Once you’ve made your final cut, you can export to
the latest broadcast formats.
The best audio format types
The most common audio file formats are listed below and then
discussed in detail:
 MP3.
 AAC.
 Ogg Vorbis.
 FLAC.
 ALAC.
 WAV.
 AIFF.
 DSD.
Lossy formats.
Lossy audio formats lose data in the transmission. They don’t
decompress back to their original file size, so they end up smaller
and some sound waves are lost. Artists and engineers who send
audio files back and forth prefer not to use lossy formats, because
the files degrade every time they’re exported.
MP3
MP3 (MPEG-1 Audio Layer III) is the most popular of the lossy
formats. MP3 files work on most devices and the files can be as
small as one-tenth the size of lossless files. MP3 is fine for the
consumer, since most of the sound it drops is inaudible, but its
limited bit depth makes it unsuitable for production work. “MP3
files can only be up to 16-bit, which is not what you want to be
working in,” says producer, mixer and engineer Gus Berry. “You
want to be working in at least 24-bit or higher when recording and
mixing.”
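The one-tenth figure above is easy to check with back-of-the-envelope arithmetic. The short Python sketch below compares one minute of CD-quality audio (44.1 kHz, 16-bit, stereo; standard values, not specific to this course) with one minute of 128 kbps MP3.

# Size of one minute of audio: uncompressed CD quality vs. 128 kbps MP3.
sample_rate = 44_100           # samples per second (CD standard)
bit_depth = 16                 # bits per sample
channels = 2                   # stereo

cd_bits = sample_rate * bit_depth * channels * 60   # one minute
mp3_bits = 128_000 * 60                             # 128 kbps for one minute

print(cd_bits / 8 / 1_000_000)     # ~10.58 MB per minute
print(mp3_bits / 8 / 1_000_000)    # 0.96 MB per minute
print(round(cd_bits / mp3_bits))   # the MP3 is roughly 11 times smaller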
AAC
Advanced Audio Coding or AAC files (also known as MPEG-4
AAC), take up very little space and are good for streaming,
especially over mobile devices. Requiring less than 1 MB per
minute of music and sounding better than MP3 at the same
bitrate, the AAC format is used by iTunes/Apple Music, YouTube
and Android.
Ogg Vorbis
Ogg Vorbis is the free, open-source audio codec that Spotify
uses. It’s great for streaming, but the compression results in some
data loss. Experts consider it a more efficient format than MP3,
with better sound at the same bitrate.
Lossless formats.
These files decompress back to their original size, keeping sound
quality intact. Audio professionals want all of the original sound
waves, so they prefer lossless. These files can be several times
larger than MP3s. Lossless bitrates depend on the volume and
density of the music, rather than the quality of the audio.
FLAC
Free Lossless Audio Codec offers lossless compression and it’s
free and open-source.
ALAC
Apple’s Lossless Audio Codec allows for lossless compression,
but it was designed for Apple devices and is best supported there.
Uncompressed formats.
These files remain the same size from origin to destination.
WAV
WAV (Waveform Audio File) retains all the original data, which
makes it the ideal format for sound engineers. “WAV has greater
dynamic range and greater bit depth,” creative producer and
sound mixer Lo Boutillette says of her preferred format. “It’s the
highest quality,” Berry agrees. “It can be 24-bit, 32-bit, all the way
up to 192kHz sample rate and even higher these days.” If you’re
collaborating and sending files back and forth, WAV holds its time
code. This can be especially useful for video projects in which
exact synchronisation is important.
AIFF
Originally created by Apple, AIFF (Audio Interchange File Format)
files are like WAV files in that they retain all of the original sound
and take up more space than MP3s. They can play on Macs and
PCs, but they don’t hold time codes, so they’re not as useful for
editing and mixing.
DSD
Direct Stream Digital is an uncompressed, high-resolution audio
format. These files encode sound using pulse-density modulation.
They are very large, with a sample rate as much as 64 times that
of a regular audio CD (64 × 44.1 kHz = 2.8224 MHz), so they
require top-of-the-line audio systems.
PCM
Pulse-Code Modulation, used for CDs and DVDs, captures
analogue waveforms and turns them into digital bits. Until DSD,
this was thought to be the closest you could get to capturing
complete analogue audio quality.
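To make pulse-code modulation concrete, here is a minimal sketch using only Python’s standard library. It samples a 440 Hz sine wave (an arbitrary example tone) 44,100 times per second, quantises each sample to a 16-bit integer and stores the result as an uncompressed WAV file.

import math
import struct
import wave

SAMPLE_RATE = 44_100   # samples per second, as on an audio CD
FREQ = 440.0           # example tone in Hz

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)              # mono
    wav.setsampwidth(2)              # 2 bytes = 16-bit samples
    wav.setframerate(SAMPLE_RATE)
    for n in range(SAMPLE_RATE):     # one second of audio
        value = math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
        # Quantise the -1.0..1.0 waveform into a signed 16-bit integer.
        wav.writeframes(struct.pack("<h", int(value * 32767)))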
A coda on digital audio formats.
If you’re listening to spoken word recordings or you’re a casual
listener who’s OK with unoptimised music files, you can go with a
compressed format and save space in your music library. If you
have more educated ears and expensive audio equipment, you
may want lossless compression for its combined space-saving
and fidelity. If you’re recording or manipulating audio or setting it
to video, always go with lossless or uncompressed.
Graphics File Formats:
There are three file formats for graphics used on the web: JPG,
GIF, and PNG. Each of these file formats is designed with a
specific purpose in mind, so it is important to understand the
differences when we use them on our websites. (A short code
sketch after the list below illustrates the transparency difference.)
JPG
Used for photographs or any type of image with smooth
transitions between colors. Does not support transparency.
PNG
Used for images with flat colors and hard edges, such as
logos, logotypes, and illustrations without gradients. Can
have either single or multiple levels of transparency.
GIF
An older format. You don’t generally need to use it, but know
that it exists.
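The transparency difference is easy to demonstrate in code. The sketch below assumes the third-party Pillow imaging library (installed with pip install Pillow); it creates a half-transparent square, saves it as a PNG with its alpha channel intact, then flattens it to plain RGB before saving as a JPG, because JPG cannot store transparency.

from PIL import Image

# A 100 x 100 half-transparent red square (RGBA: red, green, blue, alpha).
img = Image.new("RGBA", (100, 100), (255, 0, 0, 128))

img.save("square.png")                # PNG keeps the alpha channel

# JPG has no transparency, so flatten to RGB first or saving will fail.
img.convert("RGB").save("square.jpg")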
Graphic images are stored digitally using a small number of
standardized graphic file formats, including bit map, TIFF, JPEG,
GIF, PNG; they can also be stored as raw, unprocessed data.
There are likely billions of graphic images available on the World
Wide Web, and with few exceptions, almost any user can view
any of them with no difficulty. This is because all those images
are stored in what amounts to a handful of file formats. Before
discussing the principal graphics file formats, however, we need
to review the two fundamental types of graphics: raster and
vector.
A raster image is like a photo in your newspaper. Look closely
and you’ll see it’s made up of equally spaced round dots in
several distinct colors. But if you look at an ad featuring a line
drawing or, better yet, a banner headline, you won’t see an
interrupted line of dots but a solid image bounded by smooth
curves. Those are vector graphics. Many graphics are created as
vector graphics and then published as raster images.
Other File Formats
Most graphics that we see on-screen, and many that are printed
on paper, are actually structured as rectangular grids of pixels or
colored dots. A full-color image requires more color information
than a black-and-white image. Some types of graphics use
geometric functions that allow them to be scaled up or down in
size.
One final distinction should be made between how an image is
stored (its graphic file format) and how it is generated for viewing
by the end user.
Most devices that output images, whether they be monitors, TVs
or ink-jet printers, actually produce raster output. They create
successive minuscule lines, each consisting of a line of dots of
different colors (and perhaps sizes) that end up on the final page
as both images and letters. Before the advent of modern high-
resolution displays, there were CRT devices that actually
produced true vector output, but those are mainly history now. So
we need to provide our monitors or printers with sequences of all
those colored dots. A graphic that is already rasterized will save
time and electrons because it doesn’t need further processing by
the computer.

BMP
The simplest way to define a raster graphic image is by using
color-coded information for each pixel on each row. This is the
basic bit-map format used by Microsoft Windows. The
disadvantage of this type of image is that it can waste large
amounts of storage. Where there’s an area with a solid color, for
example, we don’t need to repeat that color information for every
new contiguous pixel. Instead, we can instruct the computer to
repeat the current color until we change it. This type of space-
saving trick is the basis of compression, which allows us to store
the graphic using fewer bytes. Most Web graphics today are
compressed so that they can be transmitted more quickly. Some
compression techniques will save space yet preserve all the
information that’s in the image. That’s called “lossless”
compression. Other types of compression can save a lot more
space, but the price you pay is degraded image quality. This is
known as “lossy” compression.
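The repeat-until-the-color-changes trick described above is the idea behind run-length encoding, one of the simplest lossless compression schemes. The short Python sketch below (the pixel values are made-up examples) encodes a row of pixels as (count, color) pairs and decodes it back exactly.

# Run-length encoding: store each run of identical pixels once, with a count.
def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][1] == p:
            runs[-1][0] += 1         # extend the current run
        else:
            runs.append([1, p])      # start a new run
    return runs

def rle_decode(runs):
    return [p for count, p in runs for _ in range(count)]

row = ["red"] * 6 + ["white"] * 3 + ["red"]
encoded = rle_encode(row)
print(encoded)                       # [[6, 'red'], [3, 'white'], [1, 'red']]
assert rle_decode(encoded) == row    # lossless: the original row comes back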
TIFF
Most graphics file formats were created with a particular use in
mind, although most can be used for a wide variety of image
types. Another common bit-mapped image type is Tagged Image
File Format, which is used in faxing, desktop publishing and
medical imaging. TIFF is actually a “container” that can hold bit
maps and JPEGs and allows (but doesn’t require) various types
of compression.
PDF
PDF is an abbreviation that stands for Portable
Document Format. It's a versatile file format created by
Adobe that gives people an easy, reliable way to present
and exchange documents - regardless of the software,
hardware, or operating systems being used by anyone
who views the document.
That’s all for this course – Best of Luck
Buddhi Prakash Kukreti
*************************************************************************************