Display Resolution, Color Space, Computer File System: DVCPRO


For transmission, there is a physical connector and signal protocol ("video connection standard" below). A given physical link can carry certain "display standards," which specify a particular refresh rate, display resolution, and color space. There are a number of analog and digital tape formats, though digital video can also be stored as files on a computer file system, and these files have their own formats.

The original Betacam format was launched on August 7, 1982. It is an analog component video format, storing the luminance, "Y", on one track and the chrominance on another as alternating segments of the R-Y and B-Y components, a scheme called Compressed Time Division Multiplex (CTDM).[1] This splitting of channels allows true broadcast-quality recording with 300 lines of horizontal luminance resolution and 120 lines of chrominance resolution (versus ≈30 for Betamax/VHS), on a relatively inexpensive cassette-based format.

In 1986, Betacam SP was developed, which increased horizontal resolution to 340 lines. While the quality improvement of the format itself was minor, the improvement to the VTRs was enormous in quality, features, and particularly the new larger cassette with 90 minutes of recording time. Beta SP (for "Superior Performance") became the industry standard for most TV stations and high-end production houses until the late 1990s. Despite the format's age, Beta SP remains a common standard for video post-production. The recording time is the same as for Betacam: 30 and 90 minutes for S and L cassettes, respectively. Tape speed is slightly slower in machines working in the 625/50 format, gaining one minute of duration for every five minutes of run time. So, a 90-minute tape will record 108 minutes of video in PAL.
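
The PAL runtime arithmetic above can be sketched as a quick check (illustrative only; the 6/5 ratio follows from gaining one minute per five minutes of run time; the function name is made up):

```python
# Betacam SP in 625/50 (PAL) machines runs the tape slightly slower,
# gaining one minute of duration per five minutes of nominal run time,
# i.e. a factor of 6/5.
def pal_runtime_minutes(nominal_minutes: float) -> float:
    return nominal_minutes * 6 / 5

print(pal_runtime_minutes(90))  # 108.0, matching the figure above
print(pal_runtime_minutes(30))  # 36.0
```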

MII is a professional videocassette format developed by Panasonic in 1986 as their answer to Sony's Betacam SP format. It was technically similar to Betacam SP, using metal-formulated tape loaded in the cassette and utilizing component video recording.

MII is sometimes incorrectly referred to as M2; the official name uses Roman numerals, and is
pronounced "em two". Just as Betacam SP was an improved version of its
predecessor Betacam (originally derived from Betamax) with higher video and audio quality, MII was
an enhanced and improved version of its predecessor as well, the failed M format (originally derived
from VHS). There are two sizes of MII tape. The larger is close to VHS size and has a running time of up to around 90 minutes; the smaller is about half that size, runs up to around 20 minutes, and is also the size in which head-cleaner tapes were supplied.

DVCPRO
DVCPRO, also known as DVCPRO25, is a variation of DV developed by Panasonic and introduced in 1995 for use in electronic news gathering (ENG).

Unlike baseline DV, DVCPRO uses locked audio and 4:1:1 chroma subsampling for both 50 Hz and
60 Hz variants to reduce generation loss.[8] Audio is available only in the 16-bit/48 kHz variant.

When recorded to tape, DVCPRO uses a wider track pitch (18 μm vs. the 10 μm of baseline DV), which reduces the chance of dropout errors when video is recorded to tape. Two extra longitudinal tracks provide an audio cue track and timecode control. Tape is transported 80% faster compared to baseline DV, resulting in shorter recording time. Long Play mode is not available.
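
A rough sketch of the speed/runtime trade-off (the absolute tape speeds below are commonly cited figures, assumed here rather than taken from the text):

```python
# DVCPRO moves tape ~80% faster than baseline DV, so a given length of
# tape records proportionally less. Both speeds are assumed figures.
DV_SPEED_MM_S = 18.8      # baseline DV tape speed (assumption)
DVCPRO_SPEED_MM_S = 33.8  # DVCPRO tape speed (assumption)

def dvcpro_runtime(dv_minutes: float) -> float:
    """Runtime of a tape in DVCPRO given its baseline-DV runtime."""
    return dv_minutes * DV_SPEED_MM_S / DVCPRO_SPEED_MM_S

print(round(DVCPRO_SPEED_MM_S / DV_SPEED_MM_S - 1, 2))  # 0.8 -> "80% faster"
print(round(dvcpro_runtime(60), 1))                     # 33.4 minutes
```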
DVCAM

In 1996 Sony responded with its own professional version of DV called DVCAM.

Like DVCPRO, DVCAM uses locked audio, which prevents audio synchronization drift that may happen on DV if several generations of copies are made.[9]

When recorded to tape, DVCAM uses 15 μm track pitch, which is 50% wider compared to baseline.
Accordingly, tape is transported 50% faster, which reduces recording time by one third compared to
DV. Because of the wider track and track pitch, DVCAM can perform a frame-accurate insert tape edit, while DV may vary by a few frames on each edit compared to the preview.
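
The track-pitch arithmetic above works out as follows (illustrative only):

```python
# DVCAM widens the track pitch from baseline DV's 10 um to 15 um (+50%),
# so the tape moves 50% faster and recording time drops to two thirds.
DV_PITCH_UM = 10
DVCAM_PITCH_UM = 15

def dvcam_runtime(dv_minutes: float) -> float:
    return dv_minutes * DV_PITCH_UM / DVCAM_PITCH_UM

print(dvcam_runtime(60))  # 40.0 -> one third less than 60 minutes
```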

Sony HDD-1000

The HDD-1000 is the tape transport for the Digital HDVS VTR. It requires the use of the HDDP-1000 signal processor.

 Incorporates many of the features of the BVH-3000 including compact size, lightweight, ease of tape threading,
computerized servo control, and front panel operation
 With wide band Y, PB, PR recording, a high quality picture is assured
 Wide band (30MHz) recording system
 Front panel controls for basic simple editing
 One hour recording time with 11.75-inch reel
 Time code editing possible when interfaced with the BVE-910 Editing Control Unit or the BVE-9100 Editing
Control System
 Built-in time code generator/reader
 9-pin Remote Interface
 Special playback modes: JOG: still to ±1/4 times normal; SHUTTLE: still to ±8 times normal
 Eight channels of digital audio
 Bandwidth: DC to 30MHz 0-1.5dB (luminance)
 Signal Standard: SMPTE 240M
 Power Requirements: AC 100-120/220-240V ±10%, 50/60Hz
 Power Consumption: 550W
 Operating Temperature: 5°C to 35°C (41°F to 95°F)
 Storage Temperature: -20°C to 60°C (-4°F to 140°F)
 Humidity: 10%-85% (non-condensing)
 Video tracks: 8
 Audio tracks: 8
 CTL tracks: 1
 T/C tracks: 1
 Cue tracks: 1
 Tape Speed: 80.5 cm/s
 Writing Speed (Relative Speed): 51.5m/s
 Recording Time: 63 min with 11.75 in reel
 Fast Forward/Rewind Time: Approx. 5 minutes
 Recommended Tapes: Sony's 1-inch High Density Tape or equivalent
 Reel Size: NAB Standard, 6.5 in-11.75 in reel
 LINE INPUT: CUE: XLR 3-pin
 LINE OUTPUT: CUE: XLR 3-pin
 MONITOR OUT: R/L: XLR 3-pin
 TO PROCESSOR: CN-1: D-sub 50-pin
 SERIAL REMOTE: REMOTE-1: for BVH-1000/1100 through BKH-2016 D-sub 15-pin
 PARALLEL REMOTE: REMOTE-3: D-sub 50-pin
 Signal System: YPBPR
 Signal-to-Noise Ratio: Better than 56dB (full band, unweighted)
 Quantization: 8 bits
 Sampling Rate: 74.25MHz
 K Factor: <1%, 2T pulse
 Phase Error of Each Component Channel: <3.5 ns
 Frequency Response: 20Hz-20kHz (+0.5/-1.0 dB)
 Crosstalk: < -80dB at 1kHz (between any two channels)
 TO PROCESSOR: CN-2: D-sub 50-pin

NTSC, named for the National Television System Committee, is the analog television system that
is used in most of North America, most of South America (except Brazil, Argentina, Uruguay,
and French Guiana), Burma, South Korea, Taiwan, Japan, the Philippines, and some Pacific island
nations and territories. ATSC replaced much of the analog NTSC television system in the
United States on June 12, 2009. NTSC is also the name of the U.S. standardization body that
developed the broadcast standard.[1] The first NTSC standard was developed in 1941 and had no
provision for color television.

NTSC color encoding is used with the system M television signal, which consists of
29.97 interlaced frames of video per second, or the nearly identical system J in Japan. Each frame
consists of a total of 525 scanlines, of which 486 make up the visible raster. 
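
The exact NTSC frame rate is the fraction 30000/1001, usually quoted as 29.97; multiplying by the 525 lines per frame gives the line rate:

```python
from fractions import Fraction

# NTSC (system M) timing: exact frame rate is 30000/1001 frames/s.
frame_rate = Fraction(30000, 1001)
line_rate = frame_rate * 525  # 525 scanlines per frame

print(round(float(frame_rate), 5))  # 29.97003 frames/s
print(round(float(line_rate), 2))   # 15734.27 lines/s
```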

PAL, short for Phase Alternating Line, is an analog color television encoding system used
in broadcast television systems in many countries. Other common analogue television systems
are SECAM and NTSC. This page primarily discusses the colour encoding system. See the articles
on broadcast television systems and analogue television for additional discussion of frame rates,
image resolution and audio modulation. 

The basics of PAL and the NTSC system are very similar; a quadrature amplitude
modulated subcarrier carrying the chrominance information is added to the luminance video signal to
form a composite video baseband signal. The frequency of this subcarrier is 4.43361875 MHz for
PAL, compared to 3.579545 MHz for NTSC. The SECAM system, on the other hand, uses a
frequency modulation scheme on its two line alternate colour subcarriers 4.25000 and 4.40625 MHz.
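
The quoted subcarrier frequencies can be cross-checked against their standard relationships to the line rate (the multipliers below, 283 + 3/4 + 1/625 for PAL and 227.5 for NTSC, come from the respective standards, not from the text above):

```python
from fractions import Fraction

# PAL: f_sc = (283 + 3/4 + 1/625) x 15625 Hz line rate (625/50 systems).
pal_fsc = (283 + Fraction(3, 4) + Fraction(1, 625)) * 15625
# NTSC: f_sc = 227.5 x line rate, where line rate = 30000/1001 x 525.
ntsc_fsc = Fraction(455, 2) * Fraction(30000, 1001) * 525

print(float(pal_fsc))          # 4433618.75 Hz = 4.43361875 MHz
print(round(float(ntsc_fsc)))  # 3579545 Hz ~= 3.579545 MHz
```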

The name "Phase Alternating Line" describes the way that the phase of part of the colour information
on the video signal is reversed with each line, which automatically corrects phase errors in the
transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution.
Lines where the colour phase is reversed compared to NTSC are often called PAL or phase-
alternation lines, which justifies one of the expansions of the acronym, while the other lines are called
NTSC lines.
A minor drawback is that the vertical colour resolution is poorer than the NTSC system's, but since the human eye's colour resolution is much lower than its brightness resolution, this effect is rarely noticeable.
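
A toy numerical model (a simplification, not a full PAL demodulator) shows the cancellation: represent chroma as the phasor U + jV, flip V's sign on alternate lines, apply a constant channel phase error, then undo the flip and average two adjacent lines:

```python
import cmath

U, V = 0.3, 0.4        # example chroma components (arbitrary values)
theta = cmath.pi / 18  # 10 degree phase error introduced by the channel

line_a = complex(U, V) * cmath.exp(1j * theta)    # normal line
line_b = complex(U, -V) * cmath.exp(1j * theta)   # V-inverted ("PAL") line
# Decoder: re-invert V on the alternate line (complex conjugate),
# then average the two lines -- the phase error cancels.
recovered = (line_a + line_b.conjugate()) / 2

# Hue (phase) is restored; only saturation drops slightly, by cos(theta).
print(round(cmath.phase(recovered), 6) == round(cmath.phase(complex(U, V)), 6))  # True
```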

SECAM, also written SÉCAM (Séquentiel couleur à mémoire,[1] French for "Sequential Color with Memory"), is an analog color television system first used in France. Just as with the other color standards adopted for broadcast usage around the world, SECAM is a standard which permits existing monochrome television receivers predating its introduction to continue to operate as monochrome televisions. Because of this compatibility requirement, color standards added a second signal to the basic monochrome signal, which carries the color information. The color information is called chrominance, or C for short, while the black and white information is called the luminance, or Y for short. Monochrome television receivers only display the luminance, while color receivers process both signals. Additionally, for compatibility, it is required to use no more bandwidth than the
monochrome signal alone; the color signal has to be somehow inserted into the monochrome signal,
without disturbing it. This insertion is possible because the spectrum of the monochrome TV signal is
not continuous, hence empty space exists which can be utilized. This lack of continuity results from
the discrete nature of the signal, which is divided into frames and lines. Analog color systems differ by
the way in which empty space is used. In all cases, the color signal is inserted at the end of the
spectrum of the monochrome signal.

An audio player is a kind of media player for playing back digital audio from sources including optical discs (CDs, SACDs, DVD-Audio, HDCD), audio files, and streaming audio.

In addition to VCR-like functions such as playing, pausing, stopping, rewinding, and fast-forwarding, common functions include playlisting, tag-format support, and an equalizer.

Many audio players also support simple playback of digital video, so they can be used to watch movies as well.

DTS-HD Master Audio is a lossless audio codec created by Digital Theater Systems (DTS). It was previously known as DTS++.[1] It is an extension of DTS which, when played back on devices that do not support the Master Audio or High Resolution extension, degrades to a "core" track, which is lossy. DTS-HD Master Audio is an optional audio format for both Blu-ray Disc and HD DVD, and it has steadily become the standard lossless audio format for Blu-ray Disc.

Free Lossless Audio Codec (FLAC) is an audio compression codec primarily authored by Josh Coalson. FLAC employs a lossless data compression algorithm. A digital audio recording compressed by FLAC can be decompressed into an identical copy of the original audio data. Audio sources encoded to FLAC are typically reduced to 50–60% of their original size.[2]

FLAC is an open and royalty-free format with a free software implementation made available. FLAC has support for tagging, cover art, and fast seeking. Though FLAC playback support in portable audio
devices and dedicated audio systems is limited compared to formats like MP3,[3] FLAC is supported
by more hardware devices than competing lossless formats like WavPack.

Dolby TrueHD is an advanced lossless multi-channel audio codec developed by Dolby Laboratories which is intended primarily for high-definition home-entertainment equipment such
as Blu-ray Disc and HD DVD. It is the successor to the AC-3 Dolby Digital surround sound codec
which was used as the audio standard for DVD discs. In this application, Dolby TrueHD competes
with DTS-HD Master Audio, another lossless codec from DTS.

Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a streaming provider.[note 1] The name refers to the delivery method of the medium
rather than to the medium itself. The distinction is usually applied to media that are distributed
over telecommunications networks, as most other delivery systems are either inherently streaming
(e.g., radio, television) or inherently non-streaming (e.g., books, video cassettes, audio CDs). The
verb 'to stream' is also derived from this term, meaning to deliver media in this manner. Internet
television is a commonly streamed medium.

 Datagram protocols, such as the User Datagram Protocol (UDP), send the media stream as a series of small packets. This is simple and efficient; however, there is no mechanism within the
protocol to guarantee delivery. It is up to the receiving application to detect loss or corruption and
recover data using error correction techniques. If data is lost, the stream may suffer a dropout.
 The Real-time Streaming Protocol (RTSP), Real-time Transport Protocol (RTP) and the Real-
time Transport Control Protocol (RTCP) were specifically designed to stream media over
networks. RTSP runs over a variety of transport protocols, while the latter two are built on top of
UDP.
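
A minimal sketch of the datagram idea described above (the 4-byte sequence-number header and all names here are illustrative, not part of UDP or of any real streaming protocol):

```python
import struct

def packetize(data: bytes, payload_size: int = 1200) -> list[bytes]:
    """Split a media buffer into sequence-numbered datagrams."""
    return [struct.pack(">I", seq) + data[off:off + payload_size]
            for seq, off in enumerate(range(0, len(data), payload_size))]

def missing_seqs(packets: list[bytes]) -> list[int]:
    """Receiver-side loss detection: find gaps in the sequence numbers."""
    seen = {struct.unpack(">I", p[:4])[0] for p in packets}
    return sorted(set(range(max(seen) + 1)) - seen)

pkts = packetize(bytes(5000))     # 5 packets of up to 1200 payload bytes
received = pkts[:2] + pkts[3:]    # simulate losing packet 2 in transit
print(missing_seqs(received))     # [2] -> the stream suffers a dropout here
```

Since the protocol itself guarantees nothing, this kind of sequence bookkeeping is exactly what the receiving application must do before it can attempt any error-correction or concealment.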

Peer-to-peer (P2P) protocols arrange for prerecorded streams to be sent between computers. This
prevents the server and its network connections from becoming a bottleneck. However, it raises
technical, performance, quality, and business issues.

Audio Video Interleave (also Audio Video Interleaved), known by its acronym AVI, is a multimedia container format introduced by Microsoft in November 1992 as part of its Video for
Windows technology. AVI files can contain both audio and video data in a file container that allows
synchronous audio-with-video playback. Like the DVD video format, AVI files can contain multiple audio and video streams, although these features are seldom used. Most AVI files also use the file format extensions developed by the Matrox OpenDML group in February 1996. These files are
supported by Microsoft, and are unofficially called "AVI 2.0". AVI does not provide a standardized way to encode aspect ratio information, with the result that players cannot select the right one automatically. Overhead for AVI files at the resolutions and frame rates normally used to encode standard-definition feature films is about 5 MB per hour of video, the significance of which varies with the application.
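
The ~5 MB/hour figure can be sanity-checked with back-of-envelope arithmetic (the 8-byte RIFF chunk header and 16-byte idx1 index entry are standard AVI structures; one audio chunk per video frame is an assumption made for this estimate):

```python
# Each audio/video chunk in an AVI costs an 8-byte chunk header (FourCC
# plus size field) and a 16-byte idx1 index entry. At 25 fps with one
# audio chunk per video frame:
FPS = 25
CHUNK_HEADER_BYTES = 8
INDEX_ENTRY_BYTES = 16

chunks_per_hour = FPS * 3600 * 2  # video chunks + audio chunks
overhead_bytes = chunks_per_hour * (CHUNK_HEADER_BYTES + INDEX_ENTRY_BYTES)
print(overhead_bytes / 1e6)  # 4.32 MB/hour, same order as the quoted ~5 MB
```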

Xvid (formerly "XviD") is a video codec library following the MPEG-4 standard, specifically MPEG-4 Part 2 Advanced Simple Profile (ASP). It uses ASP features such as b-frames, global and quarter
pixel motion compensation, lumi masking, trellis quantization, and H.263, MPEG and
custom quantization matrices.

Xvid is a primary competitor of the DivX Pro Codec. In contrast with the DivX codec, which
is proprietary software developed by DivX, Inc., Xvid is free software distributed under the terms of
the GNU General Public License.[1] This also means that unlike the DivX codec, which is only
available for a limited number of platforms,[2] Xvid can be used on all platforms and operating systems
for which the source code can be compiled.
Xvid is not a video format – it is a program (codec) for compressing and decompressing to the MPEG-
4 ASP format. Since Xvid uses MPEG-4 Advanced Simple Profile (ASP) compression, any video that
is encoded with it is termed "MPEG-4 ASP video" – not "Xvid video" – and can therefore be decoded
with all MPEG-4 ASP compliant decoders. Xvid encoded files can be written to a CD or DVD and
played in a DivX compatible DVD player. However, Xvid can optionally encode video with advanced
features that most DivX Certified set-top players do not support.

Advanced Systems Format (formerly Advanced Streaming Format, Active Streaming Format) is Microsoft's proprietary digital audio/digital video container format, especially meant for streaming
media. ASF is part of the Windows Media framework. The format does not specify how (i.e. with
which codec) the video or audio should be encoded; it just specifies the structure of the video/audio
stream. This is similar to the function performed by the QuickTime, AVI, or Ogg container formats.
One of the objectives of ASF was to support playback from digital media servers, HTTP servers, and
local storage devices such as hard disk drives. The most common file types contained within an ASF
file are Windows Media Audio (WMA) and Windows Media Video (WMV). Note that the file extension
abbreviations are different from the codecs which have the same name. Files containing only WMA
audio can be named using a .WMA extension, and files of audio and video content may have the
extension .WMV. Both may use the .ASF extension if desired.

3GP (3GPP file format) is a multimedia container format defined by the Third Generation Partnership Project (3GPP) for 3G UMTS multimedia services. It is used on 3G mobile phones but can also be played on some 2G and 4G phones. The 3GP file format stores video streams as MPEG-4 Part 2, H.263, or MPEG-4 Part 10 (AVC/H.264), and audio streams as AMR-NB, AMR-WB, AMR-WB+, AAC-LC, HE-AAC v1 or Enhanced aacPlus (HE-AAC v2). 3GPP allowed use of AMR and
H.263 codecs in the ISO base media file format (MPEG-4 Part 12), because 3GPP specified the
usage of the Sample Entry and template fields in the ISO base media file format as well as defining
new boxes to which codecs refer.

DivX is a brand name of products created by DivX, Inc. (formerly DivXNetworks, Inc.), including the
DivX Codec which has become popular due to its ability to compress lengthy video segments into
small sizes while maintaining relatively high visual quality.

There are two DivX codecs; the regular MPEG-4 Part 2 DivX codec and the H.264/MPEG-4
AVC DivX Plus HD codec. It is one of several codecs commonly associated with "ripping",
whereby audio and video multimedia are transferred to a hard disk and transcoded.

This media container format is used for the MPEG-4 Part 2 codec.

 DivX Media Format (DMF) features:
 Interactive video menus
 Multiple subtitles (XSUB)
 Multiple audio tracks
 Multiple video streams (for special features like bonus/extra content, just like on DVD-
Video movies)
 Chapter points
 Other metadata (XTAG)
 Multiple format
 Partial backwards compatibility with AVI

Flash Video is a container file format used to deliver video over the Internet using Adobe Flash Player versions 6–10. Flash Video content may also be embedded within SWF files. There are two
different video file formats known as Flash Video: FLV and F4V. The audio and video data within FLV
files are encoded in the same way as they are within SWF files. The latter F4V file format is based on
the ISO base media file format and is supported starting with Flash Player 9 update 3.[1][2] Both
formats are supported in Adobe Flash Player and currently developed by Adobe Systems. FLV was
originally developed by Macromedia. Flash Video FLV files usually contain material encoded
with codecs following the Sorenson Spark or VP6 video compression formats. The most recent public
releases of Flash Player (collaboration between Adobe Systems and MainConcept) also
support H.264 video and HE-AAC audio.[4] All of these codecs are currently restricted by patents.

M4V is a file container format used by Apple's iTunes application. The M4V file format is a video file
format developed by Apple and is very close to the MP4 format. The differences are Apple's optional DRM copy protection and the treatment of AC3 (Dolby Digital) audio, which is not standardized for the MP4 container.

Apple uses M4V files to encode TV episodes, movies, and music videos in the iTunes Store. The
copyright of M4V files may be protected by using Apple's FairPlay DRM copyright protection. To play
a protected M4V file, the computer needs to be authorized (using iTunes) with the account that was
used to purchase the video. However, unprotected M4V files without AC3 audio may be recognized
and played by other video players by changing the file extension from ‘.m4v’ to ‘.mp4’.
