Planet Combined Imagery Product Specs
No part of this document may be reproduced in any form or any means without the prior written consent of Planet.
Unauthorized possession or use of this material or disclosure of the proprietary information without the prior written consent
of Planet may result in legal action. If you are not the intended recipient of this report, you are hereby notified that the use,
circulation, quoting, or reproducing of this report is strictly prohibited and may be unlawful.
The following list defines terms used to describe Planet’s satellite imagery products.
Alpha Mask
An alpha mask is an image channel with binary values that can be used to render areas of the image product
transparent where no data is available.
Atmospheric Correction
The process of correcting at-sensor radiance imagery to account for effects related to the intervening
atmosphere between the earth’s surface and the satellite. Atmospheric correction has been shown to
significantly improve the accuracy of image classification.
Blackfill
Non-imaged pixels or pixels outside of the buffered area of interest that are set to black. They may appear as
pixels with a value of “0” or as “noData” depending on the viewing software.
GeoJSON
A standard for encoding geospatial data using JSON (see JSON below).
GeoTIFF
An image format with geospatial metadata suitable for use in a GIS or other remote sensing software.
Landsat 8
Freely available dataset offered through NASA and the United States Geological Survey.
Metadata
Data delivered with Planet’s imagery products that describes the product’s content and context and can be
used to conduct analysis or further processing.
Near-Infrared (NIR)
Near-infrared is the region of the electromagnetic spectrum just beyond visible red light (roughly 700–1000 nm).
Orthorectification
The process of correcting geometric image distortions introduced by satellite collection geometry, pointing
error, and terrain variability.
Ortho Tile
Ortho Tiles are one of Planet’s core product lines of high-resolution satellite imagery. Ortho tiles are available
in two different product formats, Visual and Analytic, each offered in GeoTIFF format.
PlanetScope
The first three generations of Planet’s optical systems are referred to as PlanetScope 0, PlanetScope 1, and
PlanetScope 2.
Radiometric Correction
The correction of variations in data that are not caused by the object or image being scanned. These include
corrections for the relative radiometric response between detectors, the filling of non-responsive detectors, and
scanner inconsistencies.
Reflectance Coefficient
The reflectance coefficient provided in the metadata is used as a multiplicative factor to convert Analytic TOA
Radiance values to TOA Reflectance.
RapidEye
RapidEye refers to the five-satellite constellation operating between 2009 and 2020.
Scene
A single image captured by a PlanetScope satellite.
Sensor Correction
The correction of variations in the data that are caused by sensor geometry, attitude and ephemeris.
Sentinel-2
Copernicus Sentinel-2 is a multispectral imaging satellite constellation operated by the European Space Agency.
SkySat
SkySat refers to the 15-satellite constellation in operation since 2014.
Sun Azimuth
The angle of the sun as seen by an observer located at the target point, as measured in a clockwise direction
from the North.
Sun Elevation
The angle of the sun above the horizon.
Unusable Data Mask (UDM)
The unusable data mask is a raster image having the same dimensions as the image product, indicating on a
pixel-by-pixel basis which pixels are unusable because they are cloud filled, outside of the observed area and
therefore blackfilled, or the pixel value is missing or suspect (due to saturation, blooming, hot pixels, dust,
sensor damage, etc.). The unusable data mask is an 8-bit image, where each pixel contains a bit pattern
indicating conditions applying to the imagery pixel. A value of zero indicates a "good" imagery pixel.
● Bit 0: Black fill - Identifies whether the area contains blackfill in all bands (this area was not imaged by
the spacecraft). A value of “1” indicates blackfill.
● Bit 1: Cloud - This pixel is assessed to likely be an opaque cloud.
● Bit 2: Blue is missing or suspect.
● Bit 3: Green is missing or suspect.
● Bit 4: Red is missing or suspect.
● Bit 5: Red Edge is missing or suspect (RapidEye only).
● Bit 6: NIR is missing or suspect.
● Bit 7: Unused.
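The bit pattern can be decoded with standard raster tooling. A minimal sketch in Python, assuming the rasterio library and a hypothetical file name:

```python
import rasterio

# Bit positions as defined above (bit 0 is the least significant bit).
UDM_BITS = {
    "blackfill": 0,
    "cloud": 1,
    "blue_suspect": 2,
    "green_suspect": 3,
    "red_suspect": 4,
    "red_edge_suspect": 5,  # RapidEye only
    "nir_suspect": 6,
}

def decode_udm(udm_path):
    """Return a dict of boolean masks, one per UDM condition."""
    with rasterio.open(udm_path) as src:
        udm = src.read(1)  # single-band, 8-bit mask image
    masks = {"good": udm == 0}  # zero means a fully usable pixel
    for name, bit in UDM_BITS.items():
        masks[name] = (udm >> bit) & 1 == 1
    return masks

# Hypothetical usage:
# masks = decode_udm("scene_udm.tif")
# print("cloud fraction:", masks["cloud"].mean())
```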
Usable Data Mask (UDM2)
The usable data mask is a raster image having the same dimensions as the image product, comprised of 8
bands, where each band represents a specific usability class mask. The usability masks are mutually exclusive,
and a value of one indicates that the pixel is assigned to that usability class.
● Band 1: clear mask (a value of “1” indicates the pixel is clear, a value of “0” indicates that the pixel is not
clear and is one of the 5 remaining classes below)
● Band 2: snow mask
● Band 3: shadow mask
● Band 4: light haze mask
● Band 5: heavy haze mask
● Band 6: cloud mask
● Band 7: confidence map (a value of “0” indicates a low confidence in the assigned classification, a value
of “100” indicates a high confidence in the assigned classification)
● Band 8: unusable data mask (carries the UDM bit pattern described above)
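Since the class bands are mutually exclusive, per-scene statistics fall out directly. A minimal sketch, again assuming rasterio and a hypothetical file name:

```python
import rasterio

# Band order as listed above (1-indexed, as rasterio expects).
UDM2_CLASS_BANDS = {"clear": 1, "snow": 2, "shadow": 3,
                    "light_haze": 4, "heavy_haze": 5, "cloud": 6}

def udm2_class_fractions(udm2_path):
    """Fraction of pixels assigned to each mutually exclusive class."""
    fractions = {}
    with rasterio.open(udm2_path) as src:
        for name, band in UDM2_CLASS_BANDS.items():
            data = src.read(band)
            fractions[name] = float((data == 1).mean())
    return fractions

# Hypothetical usage:
# print(udm2_class_fractions("scene_udm2.tif"))  # e.g. {"clear": 0.82, ...}
```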
Planet uses an agile aerospace approach for the design of its satellites, mission control, and operations systems;
and the development of its web-based platform for imagery processing and delivery. Planet employs an “always
on” image capturing method as opposed to the traditional tasking model used by most satellite companies
today.
Planet operates the PlanetScope (PS) and SkySat (SS) Earth-imaging constellations. Imagery is collected and
processed in a variety of formats to serve different use cases, be it mapping, deep learning, disaster response,
precision agriculture, or simple temporal image analytics to create rich information products.
PlanetScope satellite imagery is captured as a continuous strip of single frame images known as “scenes.”
Scenes are derived from multiple generations of PlanetScope satellites. Older generations of PlanetScope
satellites acquired a single RGB (red, green, blue) frame or a split-frame with an RGB half and a NIR
(near-infrared) half, depending on the capability of the satellite. The newer generations of PlanetScope
satellites (PS2.SD and PSB.SD) acquire images with a multistripe frame, with bands divided between RGB and
NIR (PS2.SD) or RGB and NIR plus coastal blue, green I, yellow, and red edge (PSB.SD).
Planet offers three product lines for PlanetScope imagery: a Basic Scene product, an Ortho Scene product, and
an Ortho Tile product. The Basic Scene product is a scaled Top of Atmosphere Radiance (at sensor) and
sensor-corrected product. The Basic Scene product is designed for users with advanced image processing and
geometric correction capabilities. The product is not orthorectified or corrected for terrain distortions. Ortho
Scenes represent the single-frame image captures as acquired by a PlanetScope satellite with additional post
processing applied. Ortho Tiles are multiple orthorectified scenes in a single strip that have been merged and
then divided according to a defined grid.
SkySat imagery is captured, similar to PlanetScope, as a continuous strip of single frame images known as
“scenes,” which are all acquired in the blue, green, red, near-infrared, and panchromatic bands. SkySat data is
available in four product lines: the Basic Scene, Ortho Scene, Basemap, and SkySat Collect products.
The PlanetScope satellite constellation consists of multiple launches of groups of individual satellites. Therefore,
on-orbit capacity is constantly improving in capability or quantity, with technology improvements deployed at a
rapid pace.
Each PlanetScope satellite is a CubeSat 3U form factor (10 cm by 10 cm by 30 cm). The complete PlanetScope
constellation of approximately 130 satellites is able to image the entire land surface of the Earth every day
(equating to a daily collection capacity of 200 million km²/day). This capacity changes based on the number of
satellites in orbit and with the seasons, as satellites image less in the northern hemisphere in winter because of
the decrease in daylight hours.
PlanetScope satellites launched starting in November 2018 have sensor characteristics that enable improved
spectral resolution. The second generation of PlanetScope satellites (known as Dove-R or PS2.SD) have a sensor
plane consisting of four separate stripes organized vertically along the track of the flight path. PlanetScope
images from PS2.SD satellites are available from March 2019 (sparsely) through April 22, 2022.
A third generation of PlanetScope sensors (known as SuperDove or PSB.SD) is currently in orbit and is
producing daily imagery with 8 spectral bands (coastal blue, blue, green I, green, red, yellow, red edge and
near-infrared). These satellites were launched in early 2020 and started producing imagery in mid-March 2020.
PSB.SD PlanetScope satellites reached near-daily cadence in August 2021. Starting on April 29, 2022, all new
PlanetScope images have 8 bands and come from the PSB.SD satellites (SuperDoves). The 8-band PlanetScope
images can be obtained through all Planet platforms, integrations, and APIs. The item type is PSScene.
Composite images with the second and third generation PlanetScope sensors are produced by an image
registration process involving multiple frames ahead and behind an anchor frame. The band alignment is
dependent on ground-lock in the anchor frame and will vary with scene content. For example, publication yield
is expected to be lower in scenes over open water, mountainous terrain, or cloudy areas.
The band alignment threshold is based on across-track registration residuals, currently set to 0.3 pixels for
“standard” PlanetScope products (instruments PS2.SD and PSB.SD) and 0.5 pixels to qualify as “test.” Whether a
PlanetScope image is classified as “standard” or “test” can be determined from the “quality_category” property
in the image’s GeoJSON metadata.
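As an illustration, a search can be restricted to “standard” quality at query time. A sketch using Planet’s Data API quick-search endpoint; the item type and the API-key environment variable are assumptions:

```python
import os
import requests

# Hypothetical quick-search for "standard"-quality PSScene items only.
search = {
    "item_types": ["PSScene"],
    "filter": {
        "type": "StringInFilter",
        "field_name": "quality_category",
        "config": ["standard"],
    },
}

resp = requests.post(
    "https://api.planet.com/data/v1/quick-search",
    auth=(os.environ["PL_API_KEY"], ""),  # Planet API key via HTTP basic auth
    json=search,
)
resp.raise_for_status()
for feature in resp.json()["features"]:
    print(feature["id"], feature["properties"]["quality_category"])
```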
Sensor Type
PS2: Four-band frame imager with a split-frame VIS+NIR filter
PS2.SD: Four-band frame imager with butcher-block filter providing blue, green, red, and NIR stripes
PSB.SD: Eight-band frame imager with butcher-block filter providing coastal blue, blue, green I, green, yellow, red, red-edge, and NIR stripes

Spectral Bands
PS2: Blue 455 - 515 nm; Green 500 - 590 nm; Red 590 - 670 nm; NIR 780 - 860 nm
PS2.SD: Blue 464 - 517 nm; Green 547 - 585 nm; Red 650 - 682 nm; NIR 846 - 888 nm
PSB.SD: Coastal Blue 431 - 452 nm; Blue 465 - 515 nm; Green I 513 - 549 nm; Green 547 - 583 nm; Yellow 600 - 620 nm; Red 650 - 680 nm; Red-Edge 697 - 713 nm; NIR 845 - 885 nm

Ground Sample Distance (nadir)
PS2 and PS2.SD: 3.0 m - 4.1 m (approximate, altitude dependent)
PSB.SD: 3.7 m - 4.2 m (approximate, altitude dependent)

Availability Dates
PS2: July 2014 - April 2022
PS2.SD: March 2019 - April 2022
PSB.SD: March 2020 - present
The SkySat-C generation satellite is a high-resolution Earth imaging satellite, first launched in 2016. Fourteen are
currently in orbit, all collecting thousands of square kilometers of imagery. Each satellite is 3-axis stabilized and agile.
All SkySats contain Cassegrain telescopes with a focal length of 3.6m, with three 5.5 megapixel CMOS imaging
detectors making up the focal plane.
Mass: 110 kg
Dimensions: 60 x 60 x 95 cm
SKYSAT POINTING
[SkySat-3 - SkySat-15]: Panchromatic 0.65 m; Multispectral 0.81 m
[SkySat-16 - SkySat-21]: Panchromatic 0.58 m; Multispectral 0.72 m
Product Framing: SkySat satellites have three cameras per satellite, which capture overlapping strips. Each of
these strips contains overlapping scenes. One scene is approximately 2560 x 1080 pixels.
Sensor Type: CMOS Frame Camera with Panchromatic and Multispectral halves
The SkySats are currently capable of capturing imagery in the traditional satellite stereo or tri-stereo approach.
Stereo pairs are captured by a single SkySat, in a single pass, symmetrically about nadir, with a total convergence
angle between ~27 and 50 degrees. Tri-stereos are captured similarly, with the middle capture collected as close
to nadir as possible and a ~27 degree convergence angle between adjacent collects. Hence the total
convergence angle of a triplet is ~53 degrees between the first and last collect.
The name of each acquired PlanetScope image is designed to be unique and allow for easier recognition and
sorting of the imagery. It includes the date and time of capture, as well as the id of the satellite that captured it.
The name of each downloaded image product is composed of the following elements:
Analytic products are scaled to Top of Atmosphere Radiance. Validation of radiometric accuracy of the on-orbit
calibration has been measured at 5% using vicarious collects in the Railroad Valley calibration site.
To convert the pixel values of the Analytic products to radiance, it is necessary to multiply the DN value by the
radiometric scale factor, as follows:
The resulting value is the at-sensor radiance of that pixel in watts per steradian per square meter per micrometer (W/(m²·sr·μm)).
To convert the pixel values of the Analytic products to Top of Atmosphere Reflectance, it is necessary to multiply
the DN value by the reflectance coefficient found in the XML file. This makes the complete conversion from DN
to Top of Atmosphere Reflectance to be as follows:
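A minimal sketch of both conversions, assuming rasterio; the scale factor and reflectance coefficient must be read from the scene’s XML metadata file, and the values in the usage comment are placeholders:

```python
import numpy as np
import rasterio

def dn_to_radiance_and_reflectance(image_path, band, scale_factor, refl_coeff):
    """Convert analytic DNs to TOA radiance and TOA reflectance.

    scale_factor and refl_coeff are the per-band values supplied in the
    product's XML metadata file.
    """
    with rasterio.open(image_path) as src:
        dn = src.read(band).astype(np.float64)
    radiance = dn * scale_factor   # W/(m^2 * sr * um)
    reflectance = dn * refl_coeff  # unitless TOA reflectance
    return radiance, reflectance

# Hypothetical usage with placeholder coefficients:
# rad, refl = dn_to_radiance_and_reflectance("scene.tif", band=4,
#                                            scale_factor=0.01,
#                                            refl_coeff=2.3e-5)
```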
Atmospheric Correction
Surface reflectance is determined from top of atmosphere (TOA) reflectance, calculated using coefficients
supplied with the Planet Radiance product.
The Planet Surface Reflectance product corrects for the effects of the Earth's atmosphere, accounting for the
molecular composition and variation with altitude along with aerosol content. Combining the use of standard
atmospheric models with the use of MODIS water vapor, ozone and aerosol data, this provides reliable and
consistent surface reflectance scenes over Planet's varied constellation of satellites as part of our normal,
on-demand data pipeline. However, there are some limitations to the corrections performed:
● In some instances there is no MODIS data overlapping a Planet scene or the area nearby. In those cases,
AOD is set to a value of 0.226 which corresponds to a “clear sky” visibility of 23km, the aot_quality is set
to the MODIS “no data” value of 127, and aot_status is set to ‘Missing Data - Using Default AOT’. If there
is no overlapping water vapor or ozone data, the correction falls back to a predefined 6SV internal
model.
● The effects of haze and thin cirrus clouds are not corrected for.
● Aerosol type is limited to a single, global model.
● All scenes are assumed to be at sea level and the surfaces are assumed to exhibit Lambertian scattering
- no BRDF effects are accounted for.
● Stray light and adjacency effects are not corrected for.
Planet provides a “harmonization” tool in all Planet platforms to perform an approximate transform of
the Surface Reflectance measurements of the PS2 instrument PlanetScope satellites to the Surface Reflectance
equivalents from PS2.SD and PSB.SD instrument PlanetScope satellites. This is done by using Sentinel-2 as the
reference.
To convert the PS2 instrument PlanetScope Surface Reflectance values to a PSB.SD equivalent measurement,
use the “harmonization” tool. This tool is available in Planet Explorer, ArcGIS Pro Add-In and QGIS Plug-In when
placing an order. Use the “harmonization” tool in the Orders API, Subscriptions API, and Google Earth Engine if
you are downloading data through the API.
Note: The harmonization process only applies to bands with a PS2 equivalent—specifically Blue, Green, Red, and
Near-infrared—and only for Surface Reflectance values.
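As an illustration, the tool can be requested when ordering through the Orders API. A sketch of a hypothetical order payload; the item ID and bundle name are placeholders:

```python
import os
import requests

# Hypothetical order applying harmonization to a PS2 surface reflectance asset.
order = {
    "name": "harmonized-example",
    "products": [{
        "item_ids": ["20200101_000000_0f22"],  # placeholder item ID
        "item_type": "PSScene",
        "product_bundle": "analytic_sr_udm2",  # placeholder SR bundle name
    }],
    "tools": [{"harmonize": {"target_sensor": "Sentinel-2"}}],
}

resp = requests.post(
    "https://api.planet.com/compute/ops/orders/v2",
    auth=(os.environ["PL_API_KEY"], ""),
    json=order,
)
resp.raise_for_status()
print(resp.json()["id"])  # order ID to poll for delivery
```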
The PlanetScope Basic Scene product is a Scaled Top of Atmosphere Radiance (at sensor) and sensor corrected
product, providing imagery as seen from the spacecraft without correction for any geometric distortions
inherent in the imaging process. It has scene-based framing and is not mapped to a cartographic projection.
This product line is available in GeoTIFF and NITF 2.1 formats.
The PlanetScope Basic Scene product is a multispectral analytic data product from the satellite constellation.
This product has not been processed to remove distortions caused by terrain and allows analysts to derive
information products for data science and analytics.
The Basic Scene product is designed for users with advanced image processing capabilities and a desire to
geometrically correct the product themselves. The imagery data is accompanied by Rational Polynomial
Coefficients (RPCs) to enable orthorectification by the user.
The table below describes the attributes for the PlanetScope Basic Scene product:
Product Components and Format The PlanetScope Basic Scene product consists of the following file components:
● Image File – GeoTIFF format
● Metadata File – XML format
● Rational Polynomial Coefficients (RPC) - XML format
● Thumbnail File – GeoTIFF format
● Unusable Data Mask (UDM) File – GeoTIFF format
● Usable Data Mask (UDM2) File - GeoTIFF format
Information Content
Processing
Geometric Corrections Spacecraft-related effects are corrected using attitude telemetry and best available
ephemeris data, and refined using GCPs.
PlanetScope satellites collect imagery as a series of overlapping framed scenes, and these Scene products are
not organized to any particular tiling grid system. The Ortho Scene products enable users to create seamless
imagery by stitching together PlanetScope Ortho Scenes of their choice and clipping them to a tiling grid
structure as required.
The PlanetScope Ortho Scene product is orthorectified and the product was designed for a wide variety of
applications that require imagery with an accurate geolocation and cartographic projection. It has been
processed to remove distortions caused by terrain and can be used for cartographic purposes. The Ortho Scenes
are delivered as visual (RGB) and analytic products. Ortho Scenes are radiometrically-, sensor-, and
geometrically-corrected (optional atmospherically corrected) products that are projected to a cartographic map
projection. The geometric correction uses fine Digital Elevation Models (DEMs) with a post spacing of between
30 and 90 meters.
Ground Control Points (GCPs) are used in the creation of every image, and the accuracy of the product will vary
from region to region based on available GCPs. Computer vision algorithms are used for extracting feature
points that tie the imagery to the GCPs.
The table below describes the attributes for the PlanetScope Ortho Scene product:
Product Components and Format PlanetScope Ortho Scene product consists of the following file components:
● Image File – GeoTIFF format
● Metadata File – XML format
● Thumbnail File – GeoTIFF format
● Unusable Data Mask (UDM) file – GeoTIFF format
● Usable Data Mask (UDM2) file - GeoTIFF format
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model.
Orthorectification uses GCPs and fine DEMs (30 m to 90 m posting).
Atmospheric Corrections Atmospheric effects are corrected using 6SV2.1 radiative transfer code. AOD, water
vapor and ozone inputs are retrieved from MODIS near-real-time data (MOD09CMA,
MOD09CMG and MOD08-D3).
The PlanetScope Visual Ortho Scene product is orthorectified and color-corrected (using a color curve). This
correction attempts to optimize colors as seen by the human eye providing images as they would look if viewed
from the perspective of the satellite. This product has been processed to remove distortions caused by terrain
and can be used for cartographic mapping and visualization purposes. This correction also eliminates the
perspective effect on the ground (not on buildings), restoring the geometry of a vertical shot.
The Visual Ortho Scene product is optimal for simple and direct use of an image. It is designed and made
visually appealing for a wide variety of applications that require imagery with an accurate geolocation and
cartographic projection. The product can be used and ingested directly into a Geographic Information System.
Information Content
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model.
Spacecraft-related effects are corrected using attitude telemetry and best available
ephemeris data. Orthorectified using GCPs and fine DEMs (30 m to 90 m posting) to
<10 m RMSE positional accuracy.
Color Enhancements Enhanced for visual use and corrected for sun angle
The PlanetScope Analytic Ortho Scene product is orthorectified, multispectral data from the satellite
constellation. Analytic products are calibrated multispectral imagery products that have been processed to
allow analysts to derive information products for data science and analytics. This product is designed for a wide
variety of applications that require imagery with an accurate geolocation and cartographic projection. The
product has been processed to remove distortions caused by terrain and can be used for many data science
and analytic applications. It eliminates the perspective effect on the ground (not on buildings), restoring the
geometry of a vertical shot. The PlanetScope Analytic Ortho Scene is optimal for value-added image processing
such as land cover classifications. The imagery has radiometric corrections applied to correct for any sensor
artifacts and transformation to at-sensor radiance.
Information Content
Analytic Bands 3-band multispectral image (red, green, blue) - only available for PS2 images
4-band multispectral image (blue, green, red, near-infrared)
8-band multispectral image (coastal blue, blue, green I, green, red, yellow, red edge
and near-infrared) - only available for PSB.SD images
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model.
Spacecraft-related effects are corrected using attitude telemetry and best available
ephemeris data. Orthorectified using GCPs and fine DEMs (30 m to 90 m posting) to
<10 m RMSE positional accuracy.
Atmospheric Corrections ● Conversion to top of atmosphere (TOA) reflectance values using at-sensor
radiance and supplied coefficients
● Conversion to surface reflectance values using the 6SV2.1 radiative transfer code
and MODIS NRT data
● Reflectance values scaled by 10,000 to reduce quantization error
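Because of that scaling, analysis code typically converts the stored integers back to unit reflectance before use. A minimal sketch, assuming rasterio and a hypothetical file name:

```python
import numpy as np
import rasterio

SR_SCALE = 10_000  # surface reflectance assets store reflectance * 10,000

with rasterio.open("scene_sr.tif") as src:  # hypothetical file name
    reflectance = src.read().astype(np.float32) / SR_SCALE

print(reflectance.min(), reflectance.max())  # values now in the 0-1 range
```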
The PlanetScope Ortho Tile products offer PlanetScope Satellite imagery orthorectified as individual 25 km by
25 km tiles referenced to a fixed, standard image tile grid system. This product was designed for a wide variety
of applications that require imagery with an accurate geolocation and cartographic projection. It has been
processed to remove distortions caused by terrain and can be used for cartographic purposes.
For PlanetScope split-frame satellites, imagery is collected as a series of overlapping framed scenes from a
single satellite in a single pass. These scenes are subsequently orthorectified and an ortho tile is then generated
from a collection of consecutive scenes, typically 4 to 5. The process of conversion of framed scene to ortho tile is
outlined in the figure below.
The table below describes the attributes for the PlanetScope Ortho Tile product:
Product Components and Format PlanetScope Ortho Tile product consists of the following file components:
● Image File – GeoTIFF format
● Metadata File – XML format
● Thumbnail File – GeoTIFF format
● Unusable Data Mask (UDM) File – GeoTIFF format
● Usable Data Mask (UDM2) File - GeoTIFF format
Product Framing PlanetScope Ortho Tiles are based on a worldwide, fixed UTM grid system. The grid
is defined in 24 km by 24 km tile centers, with 1 km of overlap (each tile has an
additional 500 m overlap with adjacent tiles), resulting in 25 km by 25 km tiles.
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model.
Orthorectified using GCPs and fine DEMs (30 m to 90 m posting).
Atmospheric Corrections Atmospheric effects are corrected using 6SV2.1 radiative transfer code. AOD, water
vapor and ozone inputs are retrieved from MODIS near-real-time data (MOD09CMA
and MOD09CMG).
The PlanetScope Visual Ortho Tile product is orthorectified and color-corrected (using a color curve). This
correction attempts to optimize colors as seen by the human eye providing images as they would look if viewed
from the perspective of the satellite. It has been processed to remove distortions caused by terrain and can be
used for cartographic mapping and visualization purposes. It eliminates the perspective effect on the ground
(not on buildings), restoring the geometry of a vertical shot. Additionally, a correction is made to the sun angle
in each image to account for differences in latitude and time of acquisition.
The Visual product is optimal for simple and direct use of the image. It is designed and made visually appealing
for a wide variety of applications that require imagery with an accurate geolocation and cartographic projection.
The product can be used and ingested directly into a Geographic Information System.
Information Content
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model,
bands are co-registered, and spacecraft-related effects are corrected using attitude
telemetry and best available ephemeris data. Orthorectified using GCPs and fine
DEMs (30 m to 90 m posting) to < 10 m RMSE positional accuracy.
Color Enhancements Enhanced for visual use and corrected for sun angle
The PlanetScope Analytic Ortho Tile product is orthorectified, multispectral data from the satellite constellation.
Analytic products are calibrated multispectral imagery products that have been processed to allow analysts to
derive information products for data science and analytics. This product is designed for a wide variety of
applications that require imagery with an accurate geolocation and cartographic projection. It has been
processed to remove distortions caused by terrain and can be used for many data science and analytic
applications. It eliminates the perspective effect on the ground (not on buildings), restoring the geometry of a
vertical shot. The orthorectified imagery is optimal for value-added image processing including
vegetation indices, land cover classifications, etc. In addition to orthorectification, the imagery has radiometric
corrections applied to correct for any sensor artifacts and a transformation to scaled at-sensor radiance.
Figure 2: PlanetScope Analytic Ortho Tiles with RGB (left) and NIR False-Color Composite (right)
Information Content
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model,
bands are co-registered, and spacecraft-related effects are corrected using attitude
telemetry and best available ephemeris data. Orthorectified using GCPs and fine
DEMs (30 m to 90 m posting) to <10 m RMSE positional accuracy.
Atmospheric Corrections ● Conversion to top of atmosphere (TOA) reflectance values using at-sensor
radiance and supplied coefficients
● Conversion to surface reflectance values using the 6SV2.1 radiative transfer code
and MODIS NRT data
● Reflectance values scaled by 10,000 to reduce quantization error
The PlanetScope Analytic 5B Ortho Tile product is identical to the Analytic Ortho Tile above except with the
PlanetScope red-edge band included.
Information Content
Analytic Bands 5-band multispectral image (blue, green, red, red-edge, near-infrared)
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model,
bands are co-registered, and spacecraft-related effects are corrected using attitude
telemetry and best available ephemeris data. Orthorectified using GCPs and fine
DEMs (30 m to 90 m posting) to <10 m RMSE positional accuracy.
The name of each acquired RapidEye image is designed to be unique and allow for easier recognition and
sorting of the imagery. It includes the date and time of capture, as well as the id of the satellite that captured it.
The name of each downloaded image product is composed of the following elements:
<tileid>_<acquisition_date>_<satellite_id>_<productLevel>_<productType>.<extension>
<acquisition_date>T<acquisition_time>_<satellite_id>_<productLevel>_<productType>.<extension>
Analytic products are scaled to Top of Atmosphere Radiance. Validation of radiometric accuracy of the on-orbit
calibration has been measured at 5% using vicarious collects in the Railroad Valley calibration site. Furthermore,
each band is maintained within a range of +/- 2.5% from the band mean value across the constellation and over
the satellite’s lifetime.
All RapidEye satellite images were collected at a bit depth of 12 bits. On board the satellites, the least
significant bit is removed, so 11 bits are stored and downloaded. On the ground, the bit shift is reversed by
a multiplication factor of 2. The bit depth of the original raw imagery can be determined from the “shifting”
field in the XML metadata file. During on-ground processing, radiometric corrections are applied and all images
are scaled to a 16-bit dynamic range. This scaling converts the (relative) pixel DNs coming directly from the
sensor into values directly related to absolute at-sensor radiances. The scaling factor is applied so that the
resultant single DN values correspond to 1/100th of a W/(m²·sr·μm). The DNs of the RapidEye image pixels
represent the absolute calibrated radiance values for the image.
To convert the pixel values of the Analytic products to radiance, it is necessary to multiply the DN value by the
radiometric scale factor, as follows:
The resulting value is the at-sensor radiance of that pixel in watts per steradian per square meter per micrometer (W/(m²·sr·μm)).
Reflectance is the ratio of reflected radiance to incoming radiance. Note that this ratio has a directional aspect.
To turn radiance into reflectance, it is necessary to relate the radiance values (i.e., the pixel DNs multiplied by
the radiometric scale factor) to the radiance with which the object is illuminated. This is often done by applying
atmospheric correction software to the image, because this way the impact of the atmosphere on the radiance
values is eliminated at the same time. It is also possible to neglect the influence of the atmosphere by
calculating the Top of Atmosphere (TOA) reflectance, taking into consideration only the sun distance and the
geometry of the incoming solar radiation. The formula to calculate the TOA reflectance without accounting for
atmospheric influence is:

REF(i) = π × L(i) × d² / (EAI(i) × cos(θS))

with:
REF(i): TOA reflectance of band i (unitless)
L(i): at-sensor radiance of band i, in W/(m²·sr·μm)
d: Earth-Sun distance at the time of acquisition, in astronomical units
EAI(i): exoatmospheric irradiance of band i, in W/(m²·μm)
θS: solar zenith angle (90° minus the sun elevation angle)
For RapidEye, the EAI values for the 5 bands are based on the “New Kurucz 2005” model.
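A worked sketch of the formula above, with the EAI value and acquisition geometry passed in as parameters; the numbers in the usage comment are placeholders, not official values:

```python
import math

def toa_reflectance(radiance, eai, sun_elevation_deg, sun_distance_au):
    """TOA reflectance from at-sensor radiance, neglecting the atmosphere.

    radiance          : at-sensor radiance, W/(m^2 * sr * um)
    eai               : exoatmospheric irradiance for the band, W/(m^2 * um)
    sun_elevation_deg : sun elevation angle from the metadata, degrees
    sun_distance_au   : Earth-Sun distance at acquisition, astronomical units
    """
    solar_zenith = math.radians(90.0 - sun_elevation_deg)
    return (math.pi * radiance * sun_distance_au ** 2) / (eai * math.cos(solar_zenith))

# Hypothetical usage with placeholder values:
# refl = toa_reflectance(radiance=80.0, eai=1500.0,
#                        sun_elevation_deg=45.0, sun_distance_au=1.0)
```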
Atmospheric Correction
Surface reflectance is determined from top of atmosphere (TOA) reflectance, calculated using coefficients
supplied with the Planet Radiance product.
The Planet Surface Reflectance product corrects for the effects of the Earth's atmosphere, accounting for the
molecular composition and variation with altitude along with aerosol content. Combining the use of standard
atmospheric models with the use of MODIS water vapor, ozone, and aerosol data, this provides reliable and
consistent surface reflectance scenes over Planet's varied constellation of satellites as part of our normal,
on-demand data pipeline. However, there are some limitations to the corrections performed:
● In some instances there is no MODIS data overlapping a Planet scene or the area nearby. In those cases,
AOD is set to a value of 0.226 which corresponds to a “clear sky” visibility of 23km, the aot_quality is set
to the MODIS “no data” value of 127, and aot_status is set to ‘Missing Data - Using Default AOT’. If there
is no overlapping water vapor or ozone data, the correction falls back to a predefined 6SV internal
model.
● The effects of haze and thin cirrus clouds are not corrected for.
● Aerosol type is limited to a single, global model.
● All scenes are assumed to be at sea level and the surfaces are assumed to exhibit Lambertian scattering
- no BRDF effects are accounted for.
● Stray light and adjacency effects are not corrected for.
The RapidEye Basic product is the least processed of the available RapidEye imagery products. This product is
designed for customers with advanced image processing capabilities and a desire to geometrically correct the
product themselves. This product line is available in GeoTIFF and NITF formats.
The RapidEye Basic Scene product is radiometrically- and sensor-corrected, providing imagery as seen from the
spacecraft without correction for any geometric distortions inherent in the imaging process, and is not mapped
to a cartographic projection. The imagery data is accompanied by all spacecraft telemetry necessary for the
processing of the data into a geo-corrected form, or when matched with a stereo pair, for the generation of
digital elevation data. Resolution of the images is 6.5 meters GSD at nadir. The images are resampled to a
coordinate system defined by an idealized basic camera model for band alignment.
This model accounts for:
● Internal detector geometry, which combines the two sensor chipsets into a virtual array
● Optical distortions caused by sensor optics
● Registration of all bands so that they line up with each other correctly
The table below lists the product attributes for the RapidEye Basic Scene product.
Product Components and Format RapidEye Basic Scene product consists of the following file components:
Product Framing
Geometric Corrections Idealized sensor, orbit and attitude models. Bands are co-registered.
The RapidEye Ortho Tile products are orthorectified as individual 25 km by 25 km tiles. This product was
designed for a wide variety of applications that require imagery with an accurate geolocation and cartographic
projection.
The RapidEye Ortho Tile products are radiometrically-, sensor- and geometrically-corrected and aligned to a
cartographic map projection. The geometric correction uses fine DEMs with a post spacing of between 30 and
90 meters. GCPs are used in the creation of every image and the accuracy of the product will vary from region
to region based on available GCPs. RapidEye Ortho Tile products are output as 25 km by 25 km tiles referenced
to a fixed, standard RapidEye image tile grid system.
The table below lists the product attributes for the RapidEye Ortho Tile product.
Product Components and Format RapidEye Ortho Tile product consists of the following file components:
● Image File – GeoTIFF file that contains image data and geolocation information
● Metadata File – XML format metadata file and GeoJSON metadata available
● Unusable Data Mask (UDM) File – GeoTIFF format
Product Framing RapidEye Ortho Tiles are based on a worldwide, fixed UTM grid system. The grid is
defined in 24 km by 24 km tile centers, with 1 km of overlap (each tile has an
additional 500 m overlap with adjacent tiles), resulting in 25 km by 25 km tiles.
Product Size Tile size is 25 km (5000 lines) by 25 km (5000 columns). 250 Mbytes per Tile for 5
bands at 5 m pixel size after orthorectification.
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model,
bands are co-registered, and spacecraft-related effects are corrected using attitude
telemetry and best available ephemeris data. Orthorectified using GCPs and fine
DEMs (30 m to 90 m posting).
The RapidEye Visual Ortho Tile product is orthorectified and color-corrected (using a color curve). This
correction optimizes colors as seen by the human eye, providing images as they would look if viewed from the
perspective of the satellite. It has been processed to remove distortions caused by terrain and can be used for
cartographic mapping and visualization purposes. It eliminates the perspective effect on the ground (not on
buildings), restoring the geometry of a vertical shot.
The visual product is optimal for simple and direct use of the image. It is designed and made visually appealing
for a wide variety of applications that require imagery with an accurate geolocation and cartographic projection.
The product can be used and ingested directly into a Geographic Information System.
Information Content
Processing
Radiometric Corrections ● Correction of relative differences of the radiometric response between detectors.
● Non-responsive detector filling, which fills null values from detectors that are no
longer responding.
● Conversion to absolute radiometric values based on calibration coefficients.
Color Enhancements Enhanced for visual use and corrected for sun angle
The RapidEye Analytic Ortho Tile product is orthorectified, multispectral data. This product is designed for a
wide variety of applications that require imagery with an accurate geolocation and cartographic projection. It
has been processed to remove distortions caused by terrain and can be used for many data science and analytic
applications. It eliminates the perspective effect on the ground (not on buildings), restoring the geometry of a
vertical shot. The orthorectified imagery is optimal for value-added image processing including vegetation
indices, land cover classifications, etc. In addition to orthorectification, the imagery has radiometric corrections
applied to correct for any sensor artifacts and transformation to at-sensor radiance.
Information Content
Analytic Bands 5-band multispectral image (blue, green, red, red edge, near-infrared)
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model,
bands are co-registered, and spacecraft-related effects are corrected using attitude
telemetry and best available ephemeris data. Orthorectified using GCPs and fine
DEMs (30 m to 90 m posting) to < 10 m RMSE positional accuracy.
Radiometric Corrections ● Correction of relative differences of the radiometric response between detectors.
● Non-responsive detector filling which fills null values from detectors that are no
longer responding.
● Conversion to absolute radiometric values based on calibration coefficients.
Atmospheric Corrections ● Conversion to top of atmosphere (TOA) reflectance values using at-sensor
radiance and supplied coefficients
● Conversion to surface reflectance values using the 6SV2.1 radiative transfer code
and MODIS NRT data
● Reflectance values scaled by 10,000 to reduce quantization error
The SkySat Basic Scene product includes Analytic, Analytic DN, L1A Panchromatic DN, and Panchromatic
imagery that is uncalibrated and in a raw digital number format. The Basic Scene Product is not corrected for
any geometric distortions inherent in the imaging process.
Imagery data is accompanied by Rational Polynomial Coefficients (RPCs) to enable orthorectification by the
user. This product is designed for users with advanced image processing capabilities and a desire to
geometrically correct the product themselves.
The SkySat Basic Scene Product has a sensor-based framing, and is not mapped to a cartographic projection.
Information Content
Sensor Type CMOS Frame Camera with Panchromatic and Multispectral halves
Product Bit Depth 16-bit Unsigned Integer Multispectral and Panchromatic Imagery
Geometric Corrections Idealized sensor model and Rational Polynomial Coefficients (RPC)
Bands are co-registered
[SkySat-3 - SkySat-15]: Panchromatic 0.65 m; Multispectral 0.81 m
[SkySat-16 - SkySat-21]: Panchromatic 0.58 m; Multispectral 0.72 m
Full motion videos are collected for between 30 and 120 seconds by a single camera on any of the SkySats. Videos
are collected using the panchromatic half of the camera; hence all videos are PAN only.
Videos are packaged and delivered with a video mpeg-4 file, plus all image frames with accompanying video
metadata and a frame index file (reference Product Types below).
Information Content
Sensor Type CMOS Frame Camera with Panchromatic and Multispectral halves
Geometric Corrections Idealized sensor model and Rational Polynomial Coefficients (RPC)
The SkySats capture up to 50 frames per second per Collect. The All-frames asset includes all of the originally
captured frames in a Collect, uncalibrated and in a raw digital number format. Delivered as a zip file containing
all frames as basic L1A panchromatic DN imagery files, with accompanying RPC txt files, and a JSON pinhole
camera model.
Information Content
Sensor Type CMOS Frame Camera with Panchromatic and Multispectral halves
Geometric Corrections Idealized sensor model and Rational Polynomial Coefficients (RPC)
Described here is the JSON pinhole model that accompanies each all-frames asset. The pinhole model is based
on projective matrices, omitting the optical distortion model. As built, the SkySat telescopes have ~1 pixel or less
of distortion across all three sensors.
Projective Model
Let im = (u, v, w)ᵀ be a position in imaging plane coordinates, with values in pixels (or fractional pixels), in 2D
homogeneous coordinates, and let X_ECEF be a position in ECEF coordinates, expressed in homogeneous
coordinates. The model relates the two through camera, intrinsic, and extrinsic matrices such that

im = P_camera · P_intrinsic · P_extrinsic · X_ECEF

For efficiency, we can also combine all three components into a single projective matrix P_projective ∈ R^(3×4) such
that

P_projective = P_camera · P_intrinsic · P_extrinsic
A given value of im describes a projective ray in the pinhole camera frame, representing the projection of 𝑋 𝐸𝐶𝐸𝐹
onto the camera sensor. Note that w=0 indicates a ray parallel to the imaging plane and will never intersect the
sensor. For w≠0, we can simply solve for u and v.
Exterior Orientation
Let T describe the satellite position at a particular time, in ECEF coordinates and with values in meters.
P_extrinsic is constructed from the exterior orientation by translating the origin to the satellite position and
applying the ECEF-to-boresight rotation (following the conventions in
https://en.wikipedia.org/wiki/Conversion_between_quaternions_and_Euler_angles#Rotation_matrices).
Interior Orientation
P_intrinsic and P_camera are based on the rigorous model from "SkySat Imaging Geometry." Their derivation involves
multiple frame changes and axis flips and is not described here. We expect that these will remain nearly
constant over time for each satellite and camera. P_extrinsic is unique to each satellite and imaging time, but
shared across cameras for each capture event.
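A sketch of applying the combined matrix to an ECEF point; the exact field names in the delivered pinhole model file are not reproduced here, so treat the argument shapes as assumptions:

```python
import numpy as np

def project_ecef_to_pixel(p_projective, x_ecef):
    """Project an ECEF position onto the imaging plane.

    p_projective: 3x4 combined matrix (P_camera @ P_intrinsic @ P_extrinsic)
    x_ecef      : (x, y, z) position in meters, ECEF
    """
    P = np.asarray(p_projective, dtype=float).reshape(3, 4)
    X = np.append(np.asarray(x_ecef, dtype=float), 1.0)  # homogeneous 4-vector
    u, v, w = P @ X
    if np.isclose(w, 0.0):
        # w = 0 means the ray is parallel to the imaging plane (see above).
        raise ValueError("ray parallel to the imaging plane; no intersection")
    return u / w, v / w  # (possibly fractional) pixel coordinates

# Hypothetical usage:
# col, row = project_ecef_to_pixel(P_matrix, (6378137.0, 0.0, 0.0))
```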
To convert the pixel values of the Analytic products to radiance, it is necessary to multiply the DN value by the
radiometric scale factor, as follows:
The resulting value is the top-of-atmosphere radiance of that pixel in watts per steradian per square meter per
micrometer (W/(m²·sr·μm)).
To convert the pixel values of the Analytic products to Top of Atmosphere Reflectance, it is necessary to multiply
the DN value by the reflectance coefficient found in the GeoTiff header. This makes the complete conversion
from DN to Top of Atmosphere Reflectance to be as follows:
Alternatively, the customer may perform the TOA Reflectance conversion on their own using the standard TOA
reflectance equation shown in the RapidEye section above, with the ESUN values given in the table below.
Table 4-D: SkySat Analytic Ortho Scene ESUN values, resampled from Thuillier irradiance spectra
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
item_type: The name of the item type that models the shared imagery data schema. Type: string (e.g. “PSScene3Band”, “SkySatScene”)
publishing_stage: Stage of publishing for an item. Both "l1a" assets and SkySatScenes with fast-rectification applied will have publishing_stage = "preview". Fast-rectification refers to the initial rectification of the orthorectified product, to enable faster publication. Once full-rectification is applied, all assets will be updated to publishing_stage = "finalized". Type: string (“preview”, “finalized”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet a variety of quality standards, for example: PAN motion blur less than 1.15 pixels, compression bits per pixel less than 3. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard”, “test”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
item_type: The name of the item type that models the shared imagery data schema. Type: string (e.g. “PSScene3Band”, “SkySatScene”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet a variety of quality standards, for example: PAN motion blur less than 1.15 pixels, compression bits per pixel less than 3. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard”, “test”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
Geometry: Composite of the geospatial extent of all frames in the video. Type: GeoJSON Polygon
Azimuth Angle Delta: Difference between start and end satellite azimuth angle. Example: 94.809043016795172
Compression Ratio: The ratio comparing an image's true size to its size on the file system. Example: 4.0
Scan Rate Kms: The ground speed at which the SkySat captures image frames. Example: 433.59375
The SkySat Ortho Scene product includes Visual, Analytic DN, Analytic, Panchromatic, and Pansharpened
Multispectral imagery. The Ortho Scene product is sensor- and geometrically-corrected, and is projected to a
cartographic map projection. The geometric correction uses fine Digital Elevation Models (DEMs) with a post
spacing of between 30 and 90 meters.
Ground Control Points (GCPs) are used in the creation of every image and the accuracy of the product will vary
from region to region based on available GCPs. Also note, ortho accuracy is not guaranteed for scenes with a
view angle greater than 30 degrees, captured above +/-85 degrees latitude, with low solar angles, varying
terrain, or with a large concentration of clouds, snow, or water within the scene or full collect.
Additionally, publication is not guaranteed for collections with very large view angles (i.e. greater than 45
degrees) and very low solar elevation (i.e. lower than 20 degrees).
● Visual - orthorectified, pansharpened, and color-corrected (using a color curve) 3-band RGB Imagery
● Pansharpened Multispectral - orthorectified, pansharpened 4-band BGRN Imagery
● Analytic SR - orthorectified, multispectral BGRN. Atmospherically corrected Surface Reflectance
product.
● Analytic - orthorectified, multispectral BGRN. Radiometric corrections applied to correct for any sensor
artifacts, with transformation to top-of-atmosphere radiance
● Analytic DN - orthorectified, multispectral BGRN, uncalibrated digital number imagery product.
Radiometric corrections applied to correct for any sensor artifacts
● Panchromatic - orthorectified, radiometrically corrected, panchromatic (PAN) imagery
● Panchromatic DN - orthorectified, panchromatic (PAN), uncalibrated digital number imagery product
Information Content
Sensor Type CMOS Frame Camera with Panchromatic and Multispectral halves
Processing
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model.
Orthorectification uses GCPs and fine DEMs (30 m to 90 m posting).
Atmospheric Correction
Surface reflectance is determined from top of atmosphere (TOA) reflectance, calculated using coefficients
supplied with the Planet Radiance product.
The Planet Surface Reflectance product corrects for the effects of the Earth's atmosphere, accounting for the
molecular composition and variation with altitude along with aerosol content. Combining the use of standard
atmospheric models with the use of MODIS water vapor, ozone and aerosol data, this provides reliable and
consistent surface reflectance scenes over Planet's varied constellation of satellites as part of our normal,
on-demand data pipeline. However, there are some limitations to the corrections performed:
● In some instances there is no MODIS data overlapping a Planet scene or the area nearby. In those cases,
AOD is set to a value of 0.226 which corresponds to a “clear sky” visibility of 23km, the aot_quality is set
to the MODIS “no data” value of 127, and aot_status is set to ‘Missing Data - Using Default AOT’. If there
is no overlapping water vapor or ozone data, the correction falls back to a predefined 6SV internal
model.
● The effects of haze and thin cirrus clouds are not corrected for.
● Aerosol type is limited to a single, global model.
● All scenes are assumed to be at sea level and the surfaces are assumed to exhibit Lambertian scattering
- no BRDF effects are accounted for.
● Stray light and adjacency effects are not corrected for.
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
item_type: The name of the item type that models the shared imagery data schema. Type: string (e.g. “PSScene3Band”, “SkySatScene”)
publishing_stage: Stage of publishing for an item. Both "l1a" assets and SkySatScenes with fast-rectification applied will have publishing_stage = "preview". Fast-rectification refers to the initial rectification of the orthorectified product, to enable faster publication. Once full-rectification is applied, all assets will be updated to publishing_stage = "finalized". Type: string (“preview”, “finalized”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet a variety of quality standards, for example: PAN motion blur less than 1.15 pixels, compression bits per pixel less than 3. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard”, “test”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
The Ortho Collect product is created by composing SkySat Ortho Scenes along an imaging strip into segments,
typically unifying ~60 SkySat Ortho Scenes. The product may contain artifacts resulting from the composing
process, in particular offsets in areas where source scenes are stitched. In a future version, artifacts caused by
scene misalignment will be hidden by cutlines. This is particularly important for the appearance of objects in
built-up areas and their accurate extraction.
Product Framing: SkySat satellites have three cameras per satellite, which capture overlapping strips. Each of
these strips contains overlapping scenes. One Collect product composes up to 60 scenes (up to 20 per camera)
and is approximately 20 km x 5.9 km.
Geometric Corrections Sensor-related effects are corrected using sensor telemetry and a sensor model.
Orthorectification uses GCPs and fine DEMs (30m to 90m posting).
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
item_type: The name of the item type that models the shared imagery data schema. Type: string (e.g. “PSScene”, “SkySatCollect”)
publishing_stage: Stage of publishing for an item. Both "l1a" assets and SkySatScenes with fast-rectification applied will have publishing_stage = "preview". Fast-rectification refers to the initial rectification of the orthorectified product, to enable faster publication. Once full-rectification is applied, all assets will be updated to publishing_stage = "finalized". Type: string (“preview”, “finalized”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet a variety of quality standards, for example: PAN motion blur less than 1.15 pixels, compression bits per pixel less than 3. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard”, “test”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
All basemaps can be viewed at full resolution within the Planet graphical user interface (up to Zoom Level 18 in
the Web Mercator Projection), giving a resolution of 0.597 m at the Equator. The projection used in Planet
basemaps has been selected to match what is typically used in web mapping applications. The basemap
resolution improves at higher and lower latitudes. The Alpha Mask indicates areas of the quad where there is no
imagery data available.
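The quoted figure follows from standard Web Mercator tile arithmetic; a quick check, assuming 256-pixel tiles and the WGS84 equatorial radius:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius
TILE_SIZE_PX = 256
ZOOM = 18

# Ground resolution at the equator for a Web Mercator zoom level:
# equatorial circumference divided by the number of pixels spanning it.
resolution = (2 * math.pi * EARTH_RADIUS_M) / (TILE_SIZE_PX * 2 ** ZOOM)
print(f"{resolution:.3f} m/px at the equator")  # prints 0.597
```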
Attribute Description
Sensors SkySat
Processing Pansharpened. Geometrically aligned. Seam lines are minimized with tonal balancing.
Cutlines to minimize visual breaks
6.1 LANDSAT 8
For detailed characteristics of the Landsat 8 sensor and mission please refer to the official Landsat 8
documentation which can be found here: https://landsat.usgs.gov/landsat-8
Information Content
Analytic Bands
Pan Band 8
Visible, NIR, SWIR Band 1-7 and Band 9 (Coastal/Aerosol, Blue, Green, Red, NIR, SWIR 1, SWIR 2, Cirrus)
Processing
Pan 15 m
TIR 100 m
Bit Depth 12-bit data depth, distributed as 16-bit data for easier processing
Geometric Corrections The Geometric Processing Subsystem (GPS) creates L1 geometrically corrected
imagery (L1G) from L1R products. The geometrically corrected products can be
systematic terrain-corrected (L1Gt) or precision terrain-corrected products (L1T). The
GPS generates a satellite model, prepares a resampling grid, and resamples the
data to create an L1Gt or L1T product. The GPS performs sophisticated satellite
geometric correction to create the image according to the map projection and
orientation specified for the L1 standard product.
Radiometric Corrections ● Converts the brightness of the L0R image pixels to absolute radiance in
preparation for geometric correction.
6.2 SENTINEL-2
For detailed characteristics of the Sentinel-2 sensor and mission please refer to the official Sentinel-2
documentation which can be found here:
https://earth.esa.int/web/sentinel/user-guides/sentinel-2-msi/product-types/level-1c
Information Content
Analytic Bands
Visible, NIR 4 bands at 10 m: blue (490 nm), green (560 nm), red (665 nm) and near infrared (842
nm).
RedEdge and NIR 4 narrow bands for vegetation characterisation (705 nm, 740 nm, 783 nm and 865
nm)
Aerosol, Water Vapor, Cirrus 443 nm for aerosols, 945 nm for water vapor, and 1375 nm for cirrus detection
Processing
Pixel Size
SWIR (2 bands) 20 m
Bit Depth 12
Geometric Corrections ● Resampling on the common geometry grid for registration between the Global
Reference Image (GRI) and the reference band.
● Collection of the tie-points from the two images for registration between the GRI
and the reference band.
MetaData/Data Structure ● Level-1C_Tile_Metadata_File (Tile Metadata): XML main metadata file (DIMAP
mandatory file) containing the requested level of information and referring to all
the product elements describing the tile.
● IMG_DATA: folder containing image data files compressed using the JPEG2000
algorithm, one file per band.
● QI_DATA: folder containing QLQC XML reports of quality checks, mask files and
PVI files.
● Inventory_Metadata.xml: inventory metadata file (mandatory).
● manifest.safe: XML SAFE manifest file (Mandatory)
● rep-info: folder containing the XSD schema provided inside a SAFE Level-0
granule
Several processing steps are applied to PlanetScope imagery products, listed in the table below.
Step Description
Darkfield/Offset Correction: Corrects for sensor bias and dark noise. Master offset tables are created by averaging on-orbit darkfield collects across 5-10 degree temperature bins and are applied to scenes during processing based on the CCD temperature at acquisition time.
Flat Field Correction: Flat fields are collected for each optical instrument prior to launch. These fields are used to correct image lighting and CCD element effects to match the optimal response area of the sensor. Flat fields are routinely updated on-orbit during the satellite lifetime.
Camera Acquisition Parameter Correction: Determines a common radiometric response for each image (regardless of exposure time, number of TDI stages, gain, camera temperature and other camera parameters).
Absolute Calibration: As a last step, the spatially and temporally adjusted datasets are transformed from digital number values into physically based radiance values (scaled to W/(m²*sr*µm)*100). A conversion sketch follows below.
Visual Product Processing: Presents the imagery as natural color, optimizing colors as seen by the human eye. This process is broken down into 4 steps:
● Flat fielding applied to correct for vignetting.
● Nominalization: sun angle correction, to account for differences in latitude and time of acquisition. This makes the imagery appear as if it was acquired at the same sun angle by converting the exposure time to the nominal time (noon).
● Two filters applied: an unsharp mask for improving local dynamic range, and a sharpening filter for accentuating spatial features.
● Custom color curve applied post warping.
The figure below illustrates the processing chain and steps involved to generate each of PlanetScope’s imagery
products.
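Because analytic pixel values are radiance scaled by 100 (stored in units of W/(m²*sr*µm)*100), recovering physical radiance is a single multiplication by the constant radiometric scale factor of 0.01 documented in the product metadata. A minimal sketch, assuming the rasterio library is available; the file name is illustrative:

    # Minimal sketch: convert scaled radiance values from an Analytic product to
    # top-of-atmosphere radiance in W/(m^2 * sr * um). File name is illustrative.
    import rasterio

    RADIOMETRIC_SCALE_FACTOR = 0.01  # constant documented in the product metadata

    with rasterio.open("example_analytic.tif") as src:
        scaled = src.read(1)  # band 1 as a numpy array of scaled radiance values

    radiance = scaled * RADIOMETRIC_SCALE_FACTOR  # physical radiance values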
For RapidEye imagery products, the processing steps are listed in the table below.
Step Description
Flat Field Correction (also referred to as spatial calibration): Correction parameters to achieve the common response of all CCD elements when exposed to the same amount of light have been collected for each optical instrument prior to launch. During operations, these corrections are adjusted every quarter, or more frequently on an as-needed basis when effects become visible or measurable. The corrections are derived using side-slither or statistical methods. This step additionally involves statistical adjustments of the read-out channel gains and offsets on a per-image basis.
Temporal Calibration: Corrections are applied so that all RapidEye cameras read the same DN (digital number) regardless of when the image was taken in the mission lifetime. Additionally, this step achieves a cross calibration between all spacecraft.
Absolute Calibration: As a last step, the spatially and temporally adjusted datasets are transformed from digital number values into physically based radiance values (scaled to W/(m²*sr*µm)*100).
Visual Product Processing: Presents the imagery as natural color, optimizing colors as seen by the human eye. This process is broken down into 3 steps:
● Nominalization: sun angle correction, to account for differences in latitude and time of acquisition. This makes the imagery appear as if it was acquired at the same sun angle by converting the exposure time to the nominal time (noon).
● Unsharp mask (sharpening filter) applied before the warp process.
● Custom color curve applied post warping.
Orthorectification: Removes terrain distortions. This process is broken down into 2 steps:
● The rectification tiedown process, wherein tie points are identified across the source images and a collection of reference images (ALOS, NAIP, Landsat) and RPCs are generated.
● The actual orthorectification of the scenes using the RPCs, to remove terrain distortions. The terrain model used for the orthorectification process is derived from multiple sources (Intermap, NED, SRTM and other local elevation datasets) which are periodically updated. Snapshots of the elevation datasets used are archived, which helps in identifying the DEM that was used for any given scene.
The figure below illustrates the processing chain and steps involved to generate each of RapidEye’s imagery
products.
For SkySat imagery products, the processing steps are listed in the table below.
Step Description
Darkfield/Offset Correction: Corrects for sensor bias and dark noise. Master offset tables are created by averaging ground calibration data collected across 5-10 degree temperature bins and applied to scenes during processing based on the CCD temperature at acquisition time.
Flat Field Correction: Flat fields are created using cloud flats collected on-orbit post-launch. These fields are used to correct image lighting and CCD element effects to match the optimal response area of the sensor.
Camera Acquisition Parameter Correction: Determines a common radiometric response for each image (regardless of exposure time, TDI, gain, camera temperature and other camera parameters).
Inter Sensor Radiometric Response (Intra Camera): Cross calibrates the 3 sensors in each camera to a common relative radiometric response. The offsets between each sensor are derived using on-orbit cloud flats and the overlap regions between sensors on SkySat spacecraft.
Super Resolution (Level 1B Processing): Super resolution is the process of creating an improved-resolution image by fusing information from lower resolution images, with the created higher resolution image being a better description of the scene.
Visual Product Processing: Presents the imagery as natural color, optimizing colors as seen by the human eye. Custom color curves are applied post warping to deliver a visually appealing image.
Orthorectification: Removes terrain distortions. This process is broken down into 2 steps:
● The rectification tiedown process, wherein tie points are identified across the source images and a collection of reference images (NAIP, ALOS, Landsat, and high resolution image chips) and RPCs are generated.
● The actual orthorectification of the scenes using the RPCs, to remove terrain distortions. The terrain model used for the orthorectification process is derived from multiple sources (SRTM, Intermap, and other local elevation datasets) which are periodically updated. Snapshots of the elevation datasets used are archived, which helps in identifying the DEM that was used for any given scene.
8. PRODUCT METADATA
8.1.1 PlanetScope
As mentioned in earlier sections, the Ortho Tile data in the Planet API will contain metadata in machine-readable GeoJSON, supported by standards-compliant GIS tools (e.g. GDAL and derivatives, JavaScript libraries). See APPENDIX A for info on general product XML metadata.
The table below describes the GeoJSON metadata schema for PlanetScope Ortho Tile products (a short parsing sketch follows the table):
epsg_code: The identifier for the grid cell that the imagery product comes from if the product is an Ortho Tile (not used if Scene). Type: number
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet the following criteria: sun altitude greater than or equal to 10 degrees, off nadir view angle less than 20 degrees, and saturated pixels fewer than 20%. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard” or “test”)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
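As a minimal illustration, these fields can be read from a saved item GeoJSON with any JSON parser. The file name below is illustrative, and the fields are assumed to sit under the item's “properties” member, as they do for Data API items:

    # Minimal sketch: read schema fields from an Ortho Tile item's GeoJSON
    # metadata. The file name is illustrative.
    import json

    with open("ortho_tile_item.json") as f:
        item = json.load(f)

    props = item["properties"]  # schema fields live under "properties"
    print(props["epsg_code"], props["quality_category"], props["sun_azimuth"])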
Table 7-B: PlanetScope Ortho Tile Surface Reflectance GeoTIFF Metadata Schema
aot_status: A text string indicating the state of the AOD retrieval. If no data exists from the source used, a default value of 0.226 is used. Example: “Missing Data - Using Default AOT”
water_vapor_source: Source of the water vapor data used for the correction. Example: “mod09cma_nrt”
8.1.2 RapidEye
The table below describes the GeoJSON metadata schema for RapidEye Ortho Tile products:
epsg_code: The identifier for the grid cell that the imagery product comes from if the product is an Ortho Tile (not used if Scene). Type: number
item_type: The name of the item type that models shared imagery data schema. Type: string (e.g. “REOrthoTile”)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
The table below describes the metadata schema for Surface Reflectance products stored in the GeoTIFF header:
aot_status: A text string indicating the state of the AOD retrieval. If no data exists from the source used, a default value of 0.226 is used. Example: “Missing Data - Using Default AOT”
water_vapor_source: Source of the water vapor data used for the correction. Example: “mod09cma_nrt”
8.2.1 PlanetScope
The table below describes the GeoJSON metadata schema for PlanetScope Ortho Scene products:
item_type: The name of the item type that models shared imagery data schema. Type: string (e.g. “PSScene”)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
The PlanetScope Ortho Scenes Surface Reflectance product is provided as a 16-bit GeoTIFF image with
reflectance values scaled by 10,000. Associated metadata describing inputs to the correction is included in a
GeoTIFF TIFFTAG_IMAGEDESCRIPTION metadata header as a JSON encoded string.
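A minimal sketch of reading that header, assuming GDAL's Python bindings are installed; the file name is illustrative, and the keys shown are the aot_status and water_vapor_source fields described in the table below:

    # Minimal sketch: parse the JSON-encoded Surface Reflectance metadata stored
    # in the GeoTIFF TIFFTAG_IMAGEDESCRIPTION header. File name is illustrative.
    import json
    from osgeo import gdal

    ds = gdal.Open("example_ortho_scene_sr.tif")
    desc = ds.GetMetadataItem("TIFFTAG_IMAGEDESCRIPTION")
    if desc:  # the tag is only expected on Surface Reflectance products
        sr_meta = json.loads(desc)
        print(sr_meta.get("aot_status"), sr_meta.get("water_vapor_source"))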
Table 7-F: PlanetScope Ortho Scene Surface Reflectance GeoTIFF Metadata Schema
aot_status: A text string indicating the state of the AOD retrieval. If no data exists from the source used, a default value of 0.226 is used. Example: “Missing Data - Using Default AOT”
water_vapor_source: Source of the water vapor data used for the correction. Example: “mod09cma_nrt”
8.2.2 SkySat
The table below describes the GeoJSON metadata schema for SkySat Ortho Scene products:
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
item_type: The name of the item type that models shared imagery data schema. Type: string (e.g. “PSScene3Band”, “SkySatScene”)
publishing_stage: Stage of publishing for an item. Both “l1a” assets and SkySatScenes with fast-rectification applied will have a publishing_stage of “preview”. Fast-rectification refers to the initial rectification of the orthorectified product, to enable faster publication. Once full-rectification is applied, all assets will be updated to a publishing_stage of “finalized”. Type: string (“preview”, “finalized”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
8.3.1 PlanetScope
The table below describes the GeoJSON metadata schema for PlanetScope Basic Scene products:
item_type: The name of the item type that models shared imagery data schema. Type: string (e.g. “PSScene”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet the following criteria: sun altitude greater than or equal to 10 degrees, off nadir view angle less than 20 degrees, and saturated pixels fewer than 20%. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard” or “test”)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
8.3.2 RapidEye
The table below describes the GeoJSON metadata schema for RapidEye Basic Scene products:
acquired: The time that the image was taken, in ISO 8601 format, in UTC. Type: string
updated: The last time this asset was updated in the Planet archive. Images may be updated after they are originally published. Type: string
8.3.3 SkySat
The table below describes the GeoJSON metadata schema for SkySat Basic Scene products:
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
publishing_stage: Stage of publishing for an item. Both “l1a” assets and SkySatScenes with fast-rectification applied will have a publishing_stage of “preview”. Fast-rectification refers to the initial rectification of the orthorectified product, to enable faster publication. Once full-rectification is applied, all assets will be updated to a publishing_stage of “finalized”. Type: string (“preview”, “finalized”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet a variety of quality standards, for example: PAN motion blur less than 1.15 pixels, and compression bits per pixel less than 3. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard”, “test”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
8.4.1 SkySat
The table below describes the GeoJSON metadata schema for SkySat Ortho Collect products:
camera_id: The specific detector used to capture the scene. Type: string (e.g. “d1”, “d2”)
item_type: The name of the item type that models shared imagery data schema. Type: string (e.g. “PSScene3Band”, “SkySatScene”)
publishing_stage: Stage of publishing for an item. Both “l1a” assets and SkySatScenes with fast-rectification applied will have a publishing_stage of “preview”. Fast-rectification refers to the initial rectification of the orthorectified product, to enable faster publication. Once full-rectification is applied, all assets will be updated to a publishing_stage of “finalized”. Type: string (“preview”, “finalized”)
quality_category: Metric for image quality. To qualify for “standard” image quality, an image must meet a variety of quality standards, for example: PAN motion blur less than 1.15 pixels, and compression bits per pixel less than 3. If the image does not meet these criteria it is considered “test” quality. Type: string (“standard”, “test”)
satellite_azimuth: Angle from true north to the satellite vector at the time of imaging, projected on the horizontal plane, in degrees. Type: number (0 - 360)
sun_azimuth: Angle from true north to the sun vector projected on the horizontal plane, in degrees. Type: number (0 - 360)
9. PRODUCT DELIVERY
Planet offers REST API access that allows listing, filtering, and downloading of data to anyone using a valid API key. The metadata features described in this document are all searchable via our Data API and downloadable via our Orders API.
Details on searching and ordering via Planet APIs are available in Planet’s Developer Center. Links are also
available below.
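As a minimal illustration of programmatic access, the sketch below issues a Data API quick-search for items matching a date filter. It assumes the requests library, an API key stored in the PL_API_KEY environment variable, and an illustrative filter; consult the Developer Center for the authoritative request format:

    # Minimal sketch: search the Data API for items matching a date filter.
    # Assumes the requests library and an API key in the PL_API_KEY variable.
    import os
    import requests

    search = {
        "item_types": ["PSScene"],
        "filter": {
            "type": "DateRangeFilter",
            "field_name": "acquired",
            "config": {"gte": "2023-01-01T00:00:00Z"},  # illustrative date
        },
    }

    resp = requests.post(
        "https://api.planet.com/data/v1/quick-search",
        auth=(os.environ["PL_API_KEY"], ""),  # API key as HTTP basic username
        json=search,
    )
    for feature in resp.json()["features"]:
        print(feature["id"], feature["properties"]["quality_category"])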
Planet Explorer is a web-based tool that can be used to search Planet’s catalog of imagery, view metadata, and
download full-resolution images. The interface and all of its features are built entirely on the externally available
Planet API.
1. View Timelapse Mosaics: A user can view Planet’s quarterly and monthly mosaics, and can zoom in up
to zoom level 12 (38 m / pixel per OpenStreetMap)
2. Search: A user can search for any location or a specific area of interest by entering it into the input box OR by uploading a geometry file (Shapefile, GeoJSON, KML, or WKT).
3. Save Search: The Save functionality allows a user to save search criteria based on area of interest, dates,
and filters.
4. Filter: A user can filter by a specific date range and/or by customizing metadata parameters (e.g. estimated cloud cover, GSD).
5. Zoom and Preview Imagery: Zoom and Preview allows a user to zoom in or out of the selected area and
preview imagery.
6. View Imagery Details: A user can review metadata details about each imagery product.
7. Download: The Download icon allows a user to download imagery based on subscription type.
9. Imagery Compare Tool: The Compare Tool allows you to compare sets of Planet imagery from different
dates.
Planet will also enable additional functionality in the form of “Labs,” which are demonstrations of capability
made accessible to users through the GUI. Labs are active product features and will evolve over time based on
Planet technology evolution and user feedback.
As part of the Planet GUI, an administration and account management tool is provided. This tool is used to
change user settings and to see past data orders. In addition, users who have administrator privileges will be
able to manage users in their organization as well as review usage statistics.
The core functionality provided by account management tools is outlined below, and Planet may evolve Account Management tools over time to meet user needs:
1. User Accounts Overview: Every user account on the Planet Platform is uniquely identified by an email
address. Each user also has a unique API key that can be used when interacting programmatically with
the Platform.
2. Organization and Sub-organization Overview: Every user on the Planet Platform belongs to one
organization. The Platform also supports “sub-organizations,” which are organizations that are attached
to a “parent” organization. An administrator of a parent organization is also considered an administrator
on all sub-organizations.
3. Account Privileges: Every user account on the Planet Platform has one of two roles: user or
administrator. An administrator has elevated access and can perform certain user management
operations or download usage metrics that are not available to standard users. An administrator of a
parent organization is also considered an administrator on all sub-organizations. Administrators can
enable or disable administrator status and enable or disable users’ access to the platform altogether.
4. Orders and Usage Review: This tool records all orders made and allows users and administrators to view and download past orders. Usage metrics are also made available, including imagery products downloaded and bandwidth usage. Usage metrics are displayed for each individual API key that is part of the organization.
Each file is described along with its contents and format in the following sections.
All PlanetScope Ortho Tile Products will be accompanied by a single general XML metadata file. This file
contains a description of basic elements of the image. The file is written in Geography Markup Language (GML)
version 3.1.1 and follows the application schema defined in the Open Geospatial Consortium (OGC) Best
Practices document for Optical Earth Observation products version 0.9.3, see
http://www.opengeospatial.org/standards/gml.
The contents of the metadata file will vary depending on the image product processing level. All metadata files
will contain a series of metadata fields common to all imagery products regardless of the processing level.
However, some fields within this group of metadata may only apply to certain product levels. In addition, certain
blocks within the metadata file apply only to certain product types. These blocks are noted within the table.
The table below describes the fields present in the General XML Metadata file for all product levels.
Field Description
“metaDataProperty” Block
EarthObservationMetaData
downlinkedTo
archivedIn
processing
license
pixelFormat Number of bits per pixel per band in the product image file
“validTime” Block
TimePeriod
beginPosition Start date and time of acquisition for source image take used to create
product, in UTC
endPosition End date and time of acquisition for source image take used to create
product, in UTC
“using” Block
EarthObservationEquipment
platform
shortName Identifies the name of the satellite platform used to collect the image
instrument
shortName Identifies the name of the satellite instrument used to collect the image
resolution Spatial resolution of the sensor used to acquire the image, units in meters
acquisitionParameters
orbitDirection The direction the satellite was traveling in its orbit when the image was
acquired
incidenceAngle The angle between the view direction of the satellite and a line
perpendicular to the image or tile center
illuminationAzimuthAngle Sun azimuth angle at center of product, in degrees from North (clockwise)
at the time of the first image line
azimuthAngle The angle from true north at the image or tile center to the scan (line)
direction at image center, in clockwise positive degrees.
spaceCraftViewAngle Spacecraft across-track off-nadir viewing angle used for imaging, in
degrees with “+” being East and “-” being West
acquisitionDateTime Date and Time at which the data was imaged, in UTC. Note: the imaging
times will be somewhat different for each spectral band. This field is not
intended to provide accurate image time tagging and hence is simply the
imaging time of some (unspecified) part of the image.
“target” Block
Footprint
multiExtentOf
posList Position listing of the four corners of the image in geodetic coordinates in
the format:
ULX ULY URX URY LRX LRY LLX LLY ULX ULY
where X = latitude and Y = longitude
centerOf
geographicLocation
topLeft
topRight
bottomRight
“resultOf” Block
EarthObservationResult
browse
BrowseInformation
type Type of browse image that accompanies the image product as part of the
ISD
referenceSystemIdentifier Identifies the reference system used for the browse image
product
spatialReferenceSystem
epsgCode EPSG code that corresponds to the datum and projection information of
the image
geodeticDatum Name of datum used for the map projection of the image
resamplingKernel Resampling method used to produce the image. The list of possible
algorithms is extendable
rowGsd The GSD of the rows (lines) within the image product
columnGsd The GSD of the columns (pixels) within the image product
radiometricCorrectionApplied Indicates whether radiometric correction has been applied to the image
atmosphericCorrectionApplied Indicates whether atmospheric correction has been applied to the image
atmosphericCorrectionParameters
mask
MaskInformation
type Type of mask file accompanying the image as part of the ISD
referenceSystemIdentifier EPSG code that corresponds to the datum and projection information of
the mask file
The following group is repeated for each spectral band included in the image product
bandSpecificMetadata
percentSuspectLines Percentage of suspect lines (lines that contained downlink errors) in the
source data for the band
radiometricScaleFactor Provides the parameter to convert the scaled radiance pixel value to
radiance. Multiplying the scaled radiance pixel values by this value derives
the Top of Atmosphere Radiance product. This value is a constant, set to
0.01
reflectanceCoefficient A multiplicative coefficient; when multiplied with the DN values, it
provides the Top of Atmosphere Reflectance values (see the sketch
following this table)
targetMeasure The physical unit that the harmonization transform is valid for
The remaining metadata fields are only included in the file for L1B RapidEye Basic products
spacecraftInformationMetadataFile Name of the XML file containing attitude, ephemeris and time for the 1B
image
rpcMetadataFile Name of XML file containing RPC information for the 1B image
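Putting the radiometric fields above to use, the sketch below reads the per-band reflectanceCoefficient values from the general XML metadata file and applies the first one to band 1 of the analytic image. It is a sketch only: namespace handling is deliberately simplified, rasterio is assumed, and the file names are illustrative.

    # Minimal sketch: convert analytic DNs to top-of-atmosphere reflectance
    # using the per-band reflectanceCoefficient from the general XML metadata.
    # Namespace handling is simplified; file names are illustrative.
    import xml.etree.ElementTree as ET
    import rasterio

    root = ET.parse("example_metadata.xml").getroot()
    # Collect every reflectanceCoefficient element, whatever its namespace.
    coeffs = [float(e.text) for e in root.iter()
              if e.tag.endswith("reflectanceCoefficient")]

    with rasterio.open("example_analytic.tif") as src:
        band1 = src.read(1)

    toa_reflectance_b1 = band1 * coeffs[0]  # coefficients follow band order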
The General XML Metadata file will follow the naming conventions as in the example below.
Example: 2328007_2010-09-21_RE4_3A_visual_metadata.xml
The unusable data mask file provides information on areas of unusable data within an image (e.g. cloud and
non-imaged areas).
The pixel size after orthorectification will be 3.125 m for PlanetScope Ortho Tiles, 3.0 m for PlanetScope Scenes, 5.0 m for RapidEye, and 0.8 m for SkySat. It is suggested that when using the file to check for usable data, a buffer of at least 1 pixel should be considered. Each bit in the 8-bit pixel identifies whether the corresponding part of the product contains useful imagery:
● Bit 0: Identifies whether the area contains blackfill in all bands (this area was not imaged). A value of “1” indicates blackfill.
● Bit 1: Identifies whether the area is cloud covered. A value of “1” indicates cloud coverage. Cloud detection is performed on a decimated version of the image (i.e. the browse image) and hence small clouds may be missed. Cloud areas are those that have pixel values in the assessed band (Red, NIR or Green) that are above a configurable threshold.
● Bit 2: Identifies whether the area contains missing (lost during downlink) or suspect (contains downlink errors) data in band 1. A value of “1” indicates missing/suspect data. If the product does not include this band, the value is set to “0”.
● Bit 3: Identifies whether the area contains missing (lost during downlink and hence blackfilled) or suspect (contains downlink errors) data in band 2. A value of “1” indicates missing/suspect data. If the product does not include this band, the value is set to “0”.
● Bit 4: Identifies whether the area contains missing (lost during downlink) or suspect (contains downlink errors) data in band 3. A value of “1” indicates missing/suspect data. If the product does not include this band, the value is set to “0”.
● Bit 5: Identifies whether the area contains missing (lost during downlink) or suspect (contains downlink errors) data in band 4. A value of “1” indicates missing/suspect data. If the product does not include this band, the value is set to “0”.
● Bit 6: Identifies whether the area contains missing (lost during downlink) or suspect (contains downlink errors) data in band 5. A value of “1” indicates missing/suspect data. If the product does not include this band, the value is set to “0”.
The UDM information is found in band 8 of the Usable Data Mask file.
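Because the flags are packed one per bit, they are decoded with bitwise operations. A minimal sketch, assuming rasterio and an illustrative file name, reading band 8 as noted above:

    # Minimal sketch: decode UDM bit flags with bitwise operations.
    # Assumes rasterio; file name is illustrative; band 8 per the note above.
    import rasterio

    with rasterio.open("example_udm2.tif") as src:
        udm = src.read(8)  # the UDM bit flags live in band 8

    blackfill = (udm & (1 << 0)) > 0      # bit 0: area was not imaged
    cloud = (udm & (1 << 1)) > 0          # bit 1: cloud covered
    band1_suspect = (udm & (1 << 2)) > 0  # bit 2: missing/suspect in band 1
    usable = udm == 0                     # no flags set at all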
The usable data mask file provides information on areas of usable data within an image (e.g. clear, snow, shadow, light haze, heavy haze and cloud).
The pixel size after orthorectification will be 3.125 m for PlanetScope Ortho Tiles and 3.0 m for PlanetScope Scenes. The usable data mask is a raster image with the same dimensions as the image product, comprising 8 bands, where each band represents a specific usability class mask. The usability masks are mutually exclusive, and a value of one indicates that the pixel is assigned to that usability class. A masking sketch follows the band list below.
● Band 1: clear mask (a value of “1” indicates the pixel is clear, a value of “0” indicates that the pixel is not
clear and is one of the 5 remaining classes below)
● Band 2: snow mask
● Band 3: shadow mask
● Band 4: light haze mask
● Band 5: heavy haze mask
● Band 6: cloud mask
● Band 7: confidence map (a value of “0” indicates a low confidence in the assigned classification, a value
of “100” indicates a high confidence in the assigned classification)
● Band 8: unusable data mask
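A minimal sketch of combining these bands into a single usability mask, assuming rasterio; the file name and the confidence threshold are illustrative:

    # Minimal sketch: build a usable-pixel mask from UDM2 bands. Assumes
    # rasterio; file name and confidence threshold are illustrative.
    import rasterio

    with rasterio.open("example_udm2.tif") as src:
        clear = src.read(1)       # band 1: clear mask (1 = clear)
        confidence = src.read(7)  # band 7: classification confidence (0-100)

    # Keep pixels classified as clear with reasonably high confidence.
    good = (clear == 1) & (confidence >= 80)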
File Naming
The UDM2 file will follow the naming conventions as in the example below.
An Ortho Tile imagery product is named by the UTM zone number, the grid row number, and the grid column number within the UTM zone, in the following format:
<ZZRRRCC>
Where:
ZZ = UTM Zone Number (This field is not padded with a zero for single digit zones in the tile
shapefile)
RRR = Tile Row Number (increasing from South to North, see Figure B-2)
CC = Tile Column Number (increasing from West to East, see Figure B-2)
Example: Tile 3363308 = UTM Zone 33, Tile Row 633, Tile Column 08
Figure B-3: Illustration of grid layout of Rows and Columns for a single UTM Zone
The center points of the tiles within a single UTM zone are defined in the UTM map projection, to which standard transformations from UTM map coordinates (x, y) to WGS84 geodetic coordinates (latitude and longitude) can be applied. For col = 1..29 and row = 1..780:
Xcol = False Easting + (col − 15) × Tile Width + Tile Width / 2
Yrow = (row − 391) × Tile Height + Tile Height / 2
The offsets 15 and 391 are needed to align the grid to the UTM zone origin. A worked sketch follows.
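The sketch below applies these formulas to the worked tile ID above. The tile width and height are assumptions for illustration (they are not stated in this excerpt), and 500,000 m is the standard UTM false easting:

    # Minimal sketch: compute a tile's center in UTM map coordinates from its
    # 7-digit <ZZRRRCC> identifier. Tile dimensions are illustrative
    # assumptions; 500,000 m is the standard UTM false easting.
    TILE_WIDTH = 25_000.0      # meters (assumed for illustration)
    TILE_HEIGHT = 25_000.0     # meters (assumed for illustration)
    FALSE_EASTING = 500_000.0  # standard UTM false easting, meters

    def tile_center(tile_id: str):
        zone = int(tile_id[0:2])  # ZZ: UTM zone number
        row = int(tile_id[2:5])   # RRR: tile row
        col = int(tile_id[5:7])   # CC: tile column
        x = FALSE_EASTING + (col - 15) * TILE_WIDTH + TILE_WIDTH / 2
        y = (row - 391) * TILE_HEIGHT + TILE_HEIGHT / 2
        return zone, x, y

    print(tile_center("3363308"))  # UTM zone 33, row 633, column 08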