TV Module 1
Fundamentals of Television Engineering
Television engineering is a field that encompasses the technology and principles behind the
transmission, reception, and display of television signals. Here are some fundamental concepts
and components of television engineering:
1. **Video Signals:** Television engineering deals with the encoding, transmission, and
decoding of video signals. These signals represent the visual information that makes up the TV
program. The most common video signal standards are analog (e.g., NTSC, PAL, SECAM) and
digital (e.g., ATSC, DVB).
2. **Resolution:** Resolution refers to the number of pixels or lines used to create an image on
the screen. Higher resolution results in a clearer and more detailed picture. Common resolutions
include 720p, 1080p, and 4K (2160p).
3. **Frame Rate:** Frame rate is the number of individual images (frames) displayed per
second. Common frame rates include 30 frames per second (fps) and 60 fps in the United
States and 25 fps and 50 fps in Europe.
4. **Color Encoding:** Television uses various color encoding systems to represent color
information. In analog TV, this includes NTSC, PAL, and SECAM, while digital TV uses color
spaces like RGB and YUV.
5. **Transmission:** Television signals are transmitted over the air (terrestrial broadcasting),
through cable systems, or via satellite. The choice of transmission method depends on factors
like signal coverage and quality.
6. **Reception:** Television sets, or receivers, are used to capture and decode transmitted
signals. Modern TVs have built-in tuners and processing circuits to display the received content.
7. **Display Technology:** Television sets use various display technologies, including CRT
(cathode-ray tube), LCD (liquid crystal display), OLED (organic light-emitting diode), and more
recently, MicroLED and MiniLED.
8. **Sound:** Television engineering also encompasses audio signals, which are typically
transmitted along with the video. The sound can be in stereo or surround sound formats.
9. **Broadcasting Standards:** Different regions and countries may have their own broadcasting
standards and frequencies. Engineers need to ensure compatibility with these standards when
designing broadcasting equipment.
10. **Digital Television:** The transition from analog to digital television has been a significant
development in television engineering. Digital TV offers improved picture and sound quality, as
well as more efficient use of bandwidth.
11. **Compression:** Digital television often relies on compression algorithms to reduce the size
of video and audio data for transmission and storage. Common compression standards include
MPEG-2, H.264 (AVC), and H.265 (HEVC).
12. **Interactivity:** Modern television engineering also involves interactivity features, such as
smart TVs with internet connectivity and the ability to run applications, as well as interactive
content and services.
13. **Signal Processing:** Television engineers use various signal processing techniques to
enhance picture and sound quality, reduce noise, and correct errors in the transmission and
reception of signals.
14. **Regulatory Compliance:** Television engineering must adhere to regulatory standards and
guidelines set by government agencies to ensure safe and fair broadcasting practices.
These fundamental concepts and components are at the core of television engineering, which is
continually evolving to deliver higher-quality and more immersive viewing experiences to
audiences around the world.
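Item 4 above mentions the RGB and YUV color spaces used in digital TV. As a numeric illustration, here is a minimal sketch of the RGB-to-Y'UV conversion using the standard ITU-R BT.601 luma weights (the weights are standard; the function name is ours):

```python
def rgb_to_yuv(r, g, b):
    """Convert 8-bit R'G'B' values to Y'UV using ITU-R BT.601 luma weights.

    Y carries the luminance (brightness) information that a black-and-white
    receiver would use; U and V carry the color-difference (chrominance) data.
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma: weighted sum of R, G, B
    u = 0.492 * (b - y)                     # blue color-difference, scaled
    v = 0.877 * (r - y)                     # red color-difference, scaled
    return y, u, v

# A pure gray (equal R, G, B) has full luma but zero chrominance:
y, u, v = rgb_to_yuv(128, 128, 128)
print(round(y), round(u), round(v))  # prints: 128 0 0
```

Because the three luma weights sum to 1.0, any gray input yields U = V = 0, which is exactly why legacy black-and-white receivers could ignore the color components.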
Scanning Mechanism
The scanning mechanism is a critical component of television systems, responsible for capturing
and displaying images on a screen. It involves the systematic scanning of the screen to create a
coherent and visually appealing image. There are two primary scanning methods used in
television: interlaced scanning and progressive scanning. Let's delve into each of these
scanning mechanisms in more detail:
1. **Interlaced Scanning:**
- Interlaced scanning is a method that divides each frame into two fields: an odd field and an
even field.
- During the odd field, only the odd-numbered lines of the image are scanned, starting from
the top and moving down.
- In the even field, only the even-numbered lines are scanned.
- These two fields are scanned rapidly, one after the other, to create a complete frame. The
human eye perceives the fields as a single, cohesive image.
- Interlaced scanning was widely used in older analog television systems like NTSC (used in
North America) and PAL (used in Europe). It helped reduce flicker and provided smoother
motion for moving objects.
- Common interlaced resolutions include 1080i (1920x1080 interlaced) and 480i (720x480
interlaced).
2. **Progressive Scanning:**
- Progressive scanning, also known as non-interlaced scanning, scans each frame
sequentially, line by line, from top to bottom.
- In progressive scanning, there are no separate fields; each frame is a complete image.
- This method provides higher image quality and eliminates interlacing artifacts, making it ideal
for high-definition and modern television systems.
- Common progressive resolutions include 720p (1280x720 progressive) and 1080p
(1920x1080 progressive).
- Progressive scanning is used in most modern digital television systems, including HDTV and
UHD TV.
- The choice between interlaced and progressive scanning depends on factors like the display
technology, desired image quality, and broadcasting standards.
- Interlaced scanning is better suited for CRT (cathode-ray tube) displays, where the rapid
scanning of fields helps reduce flicker.
- Progressive scanning is preferred for digital displays like LCD, OLED, and plasma screens, as
it delivers higher image quality and avoids interlacing artifacts.
- In practice, modern television systems often support both interlaced and progressive scanning
modes to accommodate various content sources and display devices.
- Scanning mechanisms are also closely related to frame rates, which determine how many complete images are displayed per second. Interlaced systems are usually described by their field rate: 50 or 60 fields per second, corresponding to 25 or 30 full frames. Progressive systems commonly run at 24, 25, 30, 50, or 60 frames per second (fps).
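The interlaced scanning described above can be illustrated by "weaving" two fields back into one full frame, as a deinterlacer for a progressive display would. A minimal sketch (the field/line naming convention here is illustrative):

```python
def weave(top_field, bottom_field):
    """Reassemble a full frame from two interlaced fields.

    top_field holds the frame's even-indexed lines (0, 2, 4, ...),
    bottom_field the odd-indexed lines (1, 3, 5, ...). A CRT displays
    the fields in quick succession; weaving them recovers the complete
    frame for a progressive display.
    """
    frame = [None] * (len(top_field) + len(bottom_field))
    frame[0::2] = top_field     # even lines come from the first field
    frame[1::2] = bottom_field  # odd lines come from the second field
    return frame

# A 4-line frame split into two 2-line fields:
print(weave(["line0", "line2"], ["line1", "line3"]))
# -> ['line0', 'line1', 'line2', 'line3']
```

Weaving is only artifact-free for static scenes; with motion between fields, real deinterlacers must interpolate, which is one reason progressive scanning avoids these artifacts entirely.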
Color Signal Transmission
To carry color within the same channel as the monochrome picture, analog systems add a color subcarrier to the video signal:
1. **Color Subcarrier:**
- The chrominance (color) information is modulated onto a subcarrier whose frequency lies near the upper end of the luminance band.
- For example, in NTSC (used in North America), the color subcarrier frequency is approximately 3.58 MHz, while the luminance signal occupies the band from 0 up to about 4.2 MHz. The subcarrier is placed at an odd multiple of half the horizontal line frequency so that its spectral components fall between those of the luminance signal.
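The NTSC figures can be checked numerically. In NTSC-M, the horizontal line rate is defined as the 4.5 MHz sound intercarrier divided by 286, and the color subcarrier is 455/2 times the line rate, which places it on an odd multiple of half the line frequency:

```python
# NTSC-M signal relationships (standard values, shown here for illustration)
SOUND_INTERCARRIER_HZ = 4_500_000           # audio carrier offset from video carrier

line_rate_hz = SOUND_INTERCARRIER_HZ / 286  # horizontal line frequency, ~15.734 kHz
subcarrier_hz = line_rate_hz * 455 / 2      # color subcarrier, ~3.579545 MHz

print(f"line rate:  {line_rate_hz:.2f} Hz")
print(f"subcarrier: {subcarrier_hz:.2f} Hz")
# 455/2 = 227.5 is an odd multiple of 1/2, so the chrominance sidebands
# fall midway between the luminance spectrum's line-rate harmonics.
```

This half-line-offset relationship is precisely what makes the frequency interleaving described below possible.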
2. **Frequency Interleaving:**
- In frequency interleaving, the luminance and chrominance signals are combined in such a
way that their frequency components do not overlap, reducing the chance of interference.
- The luminance signal typically occupies the lower-frequency range, and the chrominance
signal occupies the higher-frequency range.
- By separating these components in frequency, it becomes easier to filter and process them
independently without causing color distortion or degradation in picture quality.
3. **Color Burst:**
- To aid in the accurate demodulation of the color information, a reference signal called the
"color burst" is transmitted during each horizontal blanking interval in the video signal.
- The color burst provides the receiver with a precise reference phase and amplitude
information for the color subcarrier, allowing for the accurate reconstruction of colors.
Frequency interleaving is crucial because it ensures that the luminance and chrominance
components do not interfere with each other, preserving image quality and color fidelity. This
technique is specific to analog television systems like NTSC and PAL, and it's not used in digital
television systems, which use different methods, such as digital color encoding, to transmit color
information.
Aspect Ratio
Aspect ratio in television and film refers to the proportional relationship between the width and
height of the screen or image. It determines the shape of the viewing area and how content is
presented to the audience. The aspect ratio is typically expressed as a ratio of two numbers,
representing width to height. Here are the two most common aspect ratios used in television
and film:
1. **4:3 (1.33:1):**
- The 4:3 aspect ratio was the standard for early television sets and analog TV broadcasts. It
is nearly square, with the width of the screen being 1.33 times the height.
- This aspect ratio is often referred to as "standard" or "fullscreen." It was widely used for
standard-definition (SD) content.
- The 4:3 aspect ratio provides a more square-shaped viewing area and was common in
television broadcasts until the transition to widescreen formats.
2. **16:9 (1.78:1):**
- The 16:9 aspect ratio is the standard for modern high-definition (HD) and ultra-high-definition
(UHD) television and widescreen displays.
- It is wider and more rectangular compared to the 4:3 aspect ratio, with the width being 1.78
times the height.
- 16:9 is sometimes referred to as "widescreen" and is commonly used for HDTV
(High-Definition Television) and UHDTV (Ultra-High-Definition Television) broadcasts.
- It is also the aspect ratio used for most computer monitors, laptops, and mobile devices,
making it a standard for digital media content.
Aspect ratio is a critical consideration in the production and display of television and film
content. Here are some important points regarding aspect ratio:
- **Content Adaptation:** When content is created in a specific aspect ratio, it's essential to
consider how it will be displayed. For example, content originally shot in 4:3 may need to be
adapted or cropped for widescreen displays.
- **Letterboxing and Pillarboxing:** To display content with a different aspect ratio than the
screen, you may encounter letterboxing (black bars at the top and bottom) or pillarboxing (black
bars on the sides) to maintain the original aspect ratio.
- **Aspect Ratio Conversion:** Some devices and displays support aspect ratio conversion,
which allows viewers to stretch or zoom content to fill the screen. However, this can distort the
image.
- **Cinematic Aspect Ratios:** In filmmaking, various cinematic aspect ratios are used to create
specific visual effects and storytelling aesthetics. Common cinematic ratios include 1.85:1 and
2.39:1.
- **Adaptive Aspect Ratios:** Some modern displays and content delivery systems support
adaptive aspect ratios, automatically adjusting the display based on the content's original
format.
Understanding aspect ratio is crucial for content creators, broadcasters, and viewers to ensure
that content is presented in the intended format and aspect ratio, providing the best viewing
experience.
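Letterboxing and pillarboxing, mentioned above, come down to a simple fit calculation: scale the content by the largest factor that still fits the display, then center it. A minimal sketch (the function name is ours):

```python
def fit_with_bars(src_w, src_h, disp_w, disp_h):
    """Scale source content to fit a display while preserving aspect ratio.

    Returns the scaled content size and the black-bar thickness per side:
    pillarboxing (side bars) when the source is narrower than the display,
    letterboxing (top/bottom bars) when it is wider.
    """
    scale = min(disp_w / src_w, disp_h / src_h)  # largest scale that still fits
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    side_bar = (disp_w - out_w) // 2   # pillarbox bar width per side
    top_bar = (disp_h - out_h) // 2    # letterbox bar height per side
    return (out_w, out_h), side_bar, top_bar

# 4:3 SD content on a 16:9 full-HD (1920x1080) display is pillarboxed:
print(fit_with_bars(4, 3, 1920, 1080))  # -> ((1440, 1080), 240, 0)
```

For 4:3 content on a 1080-line display the image scales to 1440x1080, leaving a 240-pixel black bar on each side; stretching to fill those bars instead is exactly the distortion the "Aspect Ratio Conversion" point warns about.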
Kell Factor
The Kell factor is a parameter used in television engineering to describe the effective resolution of a scanned (line-based) imaging or display system. It is named after Raymond D. Kell, an RCA engineer who studied the perceived resolution of scanned images in the 1930s. Because picture detail rarely aligns exactly with the scan-line structure, the resolution a viewer actually perceives is lower than the raw line count suggests, and the Kell factor quantifies that reduction.
Here's how the Kell factor works and why it's important in TV engineering:
- A scanned system samples the image vertically into discrete lines. Detail that falls between two scan lines is split across them and partially lost, so the effective vertical resolution is less than the number of active lines.
- The Kell factor is typically expressed as a decimal value, with a commonly quoted value of about 0.7 for progressive scanning. Interlaced systems are often assigned a somewhat lower effective value because of interline flicker and motion artifacts.
- Effective vertical resolution is estimated as the Kell factor multiplied by the number of active scan lines. For example, a 480-line standard-definition picture with a Kell factor of 0.7 resolves roughly 0.7 × 480 ≈ 336 lines of vertical detail.
- The Kell factor guides system design: it indicates how many scan lines are needed to achieve a desired perceived resolution and helps engineers match the signal's horizontal bandwidth to its effective vertical resolution.
In summary, the Kell factor is a correction factor that relates the number of scan lines in a television system to the vertical resolution a viewer actually perceives. It reflects the inherent loss introduced by sampling an image into discrete lines, and it remains relevant when comparing interlaced and progressive formats or when specifying system resolution.
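A quick numeric illustration of applying a 0.7 Kell factor to common line counts (the function name is ours):

```python
def effective_vertical_resolution(active_lines, kell_factor=0.7):
    """Estimate the perceived vertical resolution of a scanned display.

    kell_factor is the fraction of the scan-line count that survives as
    usable resolution; about 0.7 is the commonly quoted value.
    """
    return round(active_lines * kell_factor)

print(effective_vertical_resolution(480))   # SD: 480 lines  -> 336 perceived
print(effective_vertical_resolution(1080))  # HD: 1080 lines -> 756 perceived
```

So a nominal 1080-line display delivers on the order of 750 lines of perceived vertical detail, which is why line counts alone overstate what viewers actually resolve.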
Vidicon Camera Tube
The Vidicon camera tube was one of the earliest types of camera tubes used in television and video cameras. It was developed in the mid-20th century and was widely used until the late 20th century. The Vidicon tube consists of several key components:
1. **Photoconductive Surface:** The front of the Vidicon tube has a photosensitive surface,
typically made of a material like antimony trisulfide (Sb2S3) or lead oxide (PbO). This surface is
exposed to light when capturing an image.
2. **Electron Gun:** Inside the Vidicon tube, there is an electron gun that emits a beam of
electrons towards the photosensitive surface.
3. **Target Plate:** Behind the photosensitive surface is a metal target plate. When electrons
strike this plate, they release secondary electrons.
4. **Image Formation:** When light enters the Vidicon tube through the lens, it strikes the
photosensitive surface, causing changes in its electrical conductivity. These changes are
proportional to the intensity of the light and create an electron image on the photosensitive
surface.
5. **Electron Scanning:** The electron beam from the electron gun scans across the
photosensitive surface, releasing secondary electrons where the light was absorbed. The
varying conductivity of the surface modulates the current of these secondary electrons.
6. **Signal Output:** The modulated electron current is collected and used as an electrical video
signal representing the image. This signal can be amplified and further processed for display or
recording.
Plumbicon Camera Tube
The Plumbicon camera tube is an advancement over the Vidicon tube and was developed to improve image quality and sensitivity. Introduced by Philips in the early 1960s, it was commonly used in professional television cameras until solid-state sensors took over. Here are the key components and features of the Plumbicon tube:
1. **Photoconductive Target:** The key improvement in the Plumbicon tube was the use of a
photoconductive target made of lead oxide (PbO). This material provided better sensitivity to
light compared to the materials used in Vidicon tubes.
2. **Electron Gun:** Similar to the Vidicon, the Plumbicon tube also has an electron gun to
generate an electron beam.
3. **Image Formation:** When light enters the Plumbicon tube and strikes the photoconductive
target, it causes variations in its electrical resistance. These changes in resistance are directly
proportional to the amount of light falling on different parts of the target, effectively forming an
electrical representation of the image.
4. **Electron Scanning:** The electron gun scans across the photoconductive target, and the
varying resistance modulates the current of the electron beam.
5. **Signal Output:** The modulated electron current is collected and transformed into an
electrical video signal. Plumbicon tubes offered improved sensitivity, which made them suitable
for low-light conditions and outdoor broadcasting.
While both Vidicon and Plumbicon tubes played significant roles in early television and video
production, they have been largely replaced by more modern imaging technologies, such as
CCD (Charge-Coupled Device) and CMOS (Complementary Metal-Oxide-Semiconductor)
sensors, which offer higher resolution, better image quality, and reliability while being more
compact and durable.
In terms of image quality, the Vidicon produced acceptable but less detailed pictures, while the Plumbicon delivered better, sharper images with improved color reproduction.
Please note that both Vidicon and Plumbicon camera tubes have been largely replaced
by CCD and CMOS sensors in modern camera technology due to the superior image
quality, lower maintenance requirements, and reduced fragility of these
semiconductor-based sensors.
CCD (Charge-Coupled Device) Sensor
1. **Pixel Array:** A CCD image sensor consists of an array of photosensitive pixels arranged in rows and columns. Each pixel converts incoming light into an electrical charge.
2. **Light Sensing:** When light enters the camera's lens, it strikes the photosensitive surface of the CCD. The energy from the light interacts with the semiconductor material in each pixel, causing the generation of electron-hole pairs.
3. **Charge Accumulation:** The generated electron-hole pairs represent the intensity of the
light hitting each pixel. The electrons are accumulated in potential wells within the pixel
structure. The longer the exposure to light, the more electrons accumulate, resulting in a
stronger electrical charge.
4. **Charge Transfer:** After an exposure period, the accumulated charge (electrons) in each
pixel is transferred sequentially from one row to another through a process called "charge
transfer." This row-by-row transfer is initiated by applying voltages to the CCD's electrodes.
5. **Signal Readout:** Once the charge has been shifted to the edge of the sensor, it is read out
sequentially from the sensor by an analog-to-digital converter (ADC). The ADC converts the
analog charge values into digital values, representing the pixel intensities.
6. **Image Processing:** The digital values are then processed by the camera's image
processor to create a digital image. The processor can apply various adjustments, including
white balance, exposure correction, color correction, and compression.
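Steps 4 and 5 (row-by-row charge transfer, then serial readout) can be sketched with a toy simulation. Real CCDs shift analog charge packets between electrodes; here the charges are modeled as plain numbers:

```python
def ccd_readout(pixel_rows):
    """Toy model of CCD readout: shift rows into a serial register one at a
    time, then clock each row's charges out pixel by pixel.

    pixel_rows is a 2-D list of accumulated charge values; the result is
    the serial order in which charges reach the output amplifier and ADC.
    """
    output = []
    rows = [row[:] for row in pixel_rows]          # copy; real transfer is destructive
    while rows:
        serial_register = rows.pop(0)              # vertical shift: top row enters register
        while serial_register:
            output.append(serial_register.pop(0))  # horizontal shift to output node
    return output

# 2x2 sensor: charges are read out row by row, left to right
print(ccd_readout([[10, 20], [30, 40]]))  # -> [10, 20, 30, 40]
```

This single shared readout path is the key architectural contrast with CMOS sensors, described next, where every pixel has its own amplifier and the row-wise bucket-brigade transfer is unnecessary.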
CMOS (Complementary Metal-Oxide-Semiconductor) Sensor
1. **Pixel Array:** A CMOS image sensor also consists of an array of photosensitive pixels, similar to a CCD. However, in CMOS sensors, each pixel has its own amplification and readout circuitry.
2. **Light Sensing:** When light enters the camera's lens, it strikes the photosensitive surface of
the CMOS sensor, generating electron-hole pairs in each pixel.
3. **Charge Amplification:** Unlike CCDs, CMOS sensors amplify the charge within each pixel
individually using built-in amplifiers. This amplification process boosts the signal and enhances
sensitivity.
4. **Signal Readout:** After amplification, the signal from each pixel is read out independently
by the camera's circuitry. There is no need for row-wise transfer as in CCDs.
5. **Analog-to-Digital Conversion:** The amplified analog signals from each pixel are converted
into digital values by ADCs integrated into the CMOS sensor. This conversion occurs at the
pixel level.
6. **Image Processing:** As with CCDs, the digital values are processed by the camera's image
processor to create a digital image. Various adjustments and corrections can be applied during
this processing stage.
In summary, both CCD and CMOS sensors capture light and convert it into electrical signals, but
they differ in how they accumulate, transfer, and read out those signals. CMOS sensors have
become more popular in recent years due to their lower power consumption, faster readout
speeds, and compatibility with integrated circuit technologies, allowing for more compact and
versatile camera designs.
B/W Picture Tube
A black and white (B/W) picture tube, also known as a monochrome cathode-ray tube (CRT), is
a display technology used in older television sets and computer monitors to produce grayscale
images. It operates on the principle of using electron beams to stimulate a phosphorescent
screen, creating variations in brightness to form the image. Here are the key components and
principles of a B/W picture tube:
1. **Electron Gun:**
- The heart of a B/W picture tube is the electron gun, which is located at the rear of the tube.
- The electron gun emits a focused stream of electrons when an electric current is applied to
it. These electrons form the "electron beam."
2. **Phosphorescent Screen:**
- The front of the CRT has a phosphorescent screen, also called the "faceplate" or "viewing
screen."
- This screen is coated with a layer of phosphor compounds. In the case of a B/W CRT, these
phosphors typically emit white or shades of gray when excited by electrons.
3. **Beam Scanning:**
- The electron gun scans the surface of the phosphorescent screen in a pattern known as a
"raster scan." This scan is similar to reading lines of text from left to right, top to bottom.
- The electron beam is focused and directed to specific points on the screen, pixel by pixel,
row by row. The rapid movement of the electron beam gives the appearance of continuous
motion to the human eye.
4. **Pixel Illumination:**
- When the electron beam strikes the phosphorescent screen, it imparts energy to the
phosphor compounds, causing them to become excited.
- As the excited phosphors return to their stable state, they emit light. The brightness of each
pixel is determined by the intensity of the electron beam and the properties of the phosphors.
5. **Image Formation:**
- The electron beam scans across the screen, turning on and off rapidly as it encounters
different pixels.
- By varying the intensity of the electron beam as it hits different points on the screen, various
shades of gray are created, forming the image.
- When a pixel is illuminated brightly, it appears white, while dimly lit pixels appear gray or
black.
6. **Image Reproduction:**
- To display an entire image, the electron beam scans the entire screen, refreshing it multiple
times per second (usually 50-60 times per second). This rapid refreshing rate creates the
illusion of continuous motion, allowing moving images to be displayed.
In summary, a B/W picture tube operates on the principle of using an electron gun to emit
focused electron beams onto a phosphorescent screen coated with phosphor compounds that
emit white or shades of gray when excited. By varying the intensity of the electron beam as it
scans the screen, grayscale images are formed, and when displayed rapidly, they create the
illusion of motion, allowing for the reproduction of video and images in black and white. B/W
CRTs were a fundamental technology in the early days of television and computer displays.
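The raster scan described in steps 3-6 can be sketched as a simple coordinate generator that traces the beam's path, left to right along each line and line by line down the screen:

```python
def raster_scan(rows, cols):
    """Yield (row, col) positions in raster order: left to right within a
    line, lines from top to bottom - the path the electron beam traces
    once per refresh of the screen.
    """
    for row in range(rows):
        for col in range(cols):
            yield (row, col)

# Beam path over a tiny 2x3 "screen":
print(list(raster_scan(2, 3)))
# -> [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

At a 60 Hz refresh rate this entire path is retraced 60 times per second, which is what creates the illusion of a steady, continuously lit image.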
Color Picture Tube
A color picture tube is a CRT that extends the monochrome design with three electron guns and a screen coated with triads of red, green, and blue phosphor dots. Its key components and principles:
1. **Electron Guns:** The tube contains three electron guns, one each for the red, green, and blue components of the image, emitting independently controlled electron beams.
2. **Phosphor Triads:** The screen is coated with closely spaced triads of red, green, and blue phosphor dots; a shadow mask (or aperture grille) just behind the screen ensures that each beam can strike only the dots of its own color.
3. **Beam Scanning:**
- The three electron guns scan the screen simultaneously, each emitting an electron beam focused on a specific color phosphor dot within a color triad.
- The scanning process is similar to that of a B/W CRT, with the electron beams moving rapidly in a raster pattern across the screen.
4. **Color Synthesis:**
- The human eye perceives color based on the additive combination of red, green, and blue
light. By precisely controlling the intensity of the electron beams, the CRT can synthesize a full
range of colors, including secondary and tertiary colors.
5. **Image Formation:**
- As with B/W CRTs, the electron beams rapidly scan the entire screen to display an entire
image.
- The pixel-by-pixel illumination of the screen with various combinations of red, green, and
blue light creates a full-color image.
6. **Image Reproduction:**
- The CRT refreshes the image on the screen multiple times per second, typically 50-60 times
per second, to create the illusion of continuous motion for video content.
In summary, a color picture tube operates on the principle of using three electron guns (red,
green, and blue) to emit focused electron beams onto a phosphorescent screen with color
triads. The electron beams excite the phosphor dots within the triads, causing them to emit red,
green, or blue light. By controlling the intensities and positions of these electron beams, a color
CRT can produce a wide range of colors and display full-color images and video. Color CRTs
were the standard for television and computer displays for many decades before being largely
replaced by newer display technologies, such as LCD and OLED.
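The additive color synthesis performed by the three guns can be illustrated with a small lookup over full-on/full-off beam intensities (the function name and the simple lookup are ours, for illustration; real CRTs vary the intensities continuously):

```python
def additive_mix(red, green, blue):
    """Name the color produced by full-on/full-off red, green, and blue
    electron beams - the additive primaries of a color CRT's three guns.
    Intermediate beam intensities produce the in-between shades.
    """
    names = {
        (True, True, True): "white",
        (True, True, False): "yellow",    # red + green light
        (True, False, True): "magenta",   # red + blue light
        (False, True, True): "cyan",      # green + blue light
        (True, False, False): "red",
        (False, True, False): "green",
        (False, False, True): "blue",
        (False, False, False): "black",   # no beams: unlit phosphor
    }
    return names[(red > 0, green > 0, blue > 0)]

print(additive_mix(255, 255, 0))    # -> yellow
print(additive_mix(255, 255, 255))  # -> white
```

Note that these additive results differ from mixing paints: combining red and green light yields yellow, and all three primaries at full intensity yield white, not black.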
Each of the analog television standards mentioned in this module (NTSC, PAL, and SECAM) was developed independently and has its own characteristics. The primary differences between them lie in frame rates, color encoding methods, and color subcarrier frequencies. Compatibility issues arise when broadcasting or
displaying content from one standard on a TV system designed for another. Modern digital
television and video standards have largely supplanted these analog standards, but they still
have historical significance and legacy equipment in use in some regions.