
SATELLITE PHOTOGRAMMETRY

Presented by:
Sumit Singh (20520010)
Sourav Sangam (20520008)
Parth Solanki (20520004)
CONTENTS
1. Introduction

2. Data Acquisition

3. Data Processing
Terminologies
Data Processing Models

4. Data Presentation

5. Advantages/Disadvantages
Introduction
Classification of Photogrammetry :
1. Terrestrial / Close range
2. Aerial
3. Space / Satellite

Satellite Photogrammetry :
If the sensing system is space borne, it is called space photogrammetry, satellite photogrammetry
or extra-terrestrial photogrammetry.

Twin branches:
• Space photogrammetry - metric
• Remote sensing - thematic

How it all started?

SPOT-1, launched February 22, 1986, offered 10 m panchromatic and 20 m multispectral resolution.
It was the first satellite with stereoscopic capability.

Purpose: to map the Alps.

It was not fully successful, as the errors in the height information were large.

Though it was not successful, it started a revolution in the field of satellite photogrammetry.
Operational satellite timeline (high resolution)
• IKONOS (1999): 0.8 - 3.2 m
• QuickBird (2001): 0.6 - 2.4 m
• CartoSat-1 (2005): 2.5 m
• WorldView-1 (2007): 0.5 m
• GeoEye-1 (2008): 0.5 - 2.0 m
• WorldView-2 (2009): 0.5 m
• Pleiades (2011): 0.5 m
• SPOT 7 (2014): 1.5 m
• WorldView-3 (2014): 0.31 m
• GeoEye-2 / WorldView-4 (2016): 0.31 m
High resolution satellites

Satellite           | Developer                        | Resolution | Revisit interval | Sensor footprint
SPOT 6 & 7 (2014)   | SPOT Image, France               | 1.5 m      | Daily            | 60 km x 60 km
WorldView-4 (2016)  | DigitalGlobe, California, U.S.A. | 0.31 m     | < 1 day          | 26.6 km x 112 km
GeoEye-1 (2008)     | DigitalGlobe, California, U.S.A. | 0.46 m     | 10 days          | 224 km x 28 km
Pleiades-1A (2011)  | ORFEO programme, France          | 0.5 m      | Daily            | 1000 km x 1000 km
TripleSat (2015)    | 21AT, China                      | 0.8 m      | Daily            | 42 km x 47.2 km
KOMPSAT-3A (2015)   | KARI, South Korea                | 0.55 m     | Daily            | 12 km x 12 km
High resolution satellites (continued)

Satellite              | Developer                                              | Resolution | Revisit interval | Sensor footprint
IKONOS (1999 - 2015)   | DigitalGlobe, California, U.S.A.                       | 0.82 m     | 3 days           | 11.3 km x 11.3 km
Gaofen-2 (2014)        | China Academy of Space Technology (CAST)               | 0.8 m      | 5 days           | 45 km x 45 km
CartoSat-1 (2005)      | ISRO, India                                            | 2.5 m      | 5 days           | 30 km x 30 km
TH-1-03 (2015)         | Beijing Space Eye Innovation Tech Co. Ltd. (BSEI), China | 5 m      | 5 days           | 60 km x 60 km (triple stereo)
QuickBird (2001 - 2015)| DigitalGlobe, California, U.S.A.                       | 0.65 m     | 1 - 3.5 days     | 18 km x 18 km
DATA ACQUISITION
General Workflow
Sensor Types
Sensors
A push broom scanner (along track scanner) is a technology for obtaining images
with spectroscopic sensors.
It is regularly used for passive remote sensing from space and in spectral analysis on production lines, for
example with near-infrared spectroscopy

A whisk broom or spotlight sensor (across track scanner) is a technology for obtaining satellite images
with optical cameras.
In a whisk broom sensor, a mirror scans across the satellite’s path (ground track), reflecting light into a
single detector which collects data one pixel at a time.

The advantage of along track stereo images compared with images that are taken from
adjacent orbits (across track) is that they are acquired in almost the same ground and
atmospheric conditions.

1. Whisk broom 2. Push broom


DATA ACQUISITION
Nadir Imaging

Source: Masterclass : photogrammetric processing of pushbroom satellite systems, Petr S. Titarov, Nessabar, Bulgaria
DATA ACQUISITION
Off Nadir Imaging

Source: Masterclass : photogrammetric processing of pushbroom satellite systems, Petr S. Titarov, Nessabar, Bulgaria
Satellite topographic mapping
• Stereo satellite images are captured consecutively by a single satellite along the same orbit within a few seconds (along-track imaging technique), or by the same satellite (or different satellites) from different orbits on different dates (across-track imaging technique).
• The base-to-height (B/H) ratio should be close to 1 for a high-quality stereo model with high elevation accuracy.
• Satellites: Cartosat-1, CHRIS/PROBA, EROS-A, IRS, IKONOS, MOMS-02, SPOT, and Terra ASTER
• Stereo data can be collected on the same orbit, or on different orbits (beware of changes between dates)
• The satellite may have to be rotated to point the sensor correctly
• The optimum base-to-height ratio is 0.6 to 1.0
• Atmospheric effects (refraction, optical thickness) become more significant at higher look angles
(Figures: stereo acquisition from different orbits and from the same orbit.)
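The base-to-height geometry above can be sketched numerically. A minimal sketch, assuming an along-track stereo pair described only by its fore and aft look angles under a flat-Earth approximation (the angle values below are illustrative, not from the slides):

```python
import math

def base_to_height(fore_deg, aft_deg):
    """Approximate B/H ratio of an along-track stereo pair.

    Under a flat-Earth approximation the baseline between the two
    exposure stations is H * (tan(fore) + tan(aft)), so B/H reduces
    to the sum of the tangents of the two look angles from nadir.
    """
    return math.tan(math.radians(fore_deg)) + math.tan(math.radians(aft_deg))

# Fore/aft tilts of 26 and 5 degrees (Cartosat-1-like, illustrative)
ratio = base_to_height(26, 5)
print(round(ratio, 2))  # 0.58, just below the 0.6-1.0 optimum
```

Widening the look angles raises B/H (and elevation accuracy), but, as the slide notes, atmospheric effects also grow with the look angle.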
Inclination Angle of a Stereoscene
• C = center of the scene
• I- = eastward inclination
• I+ = westward inclination
• O1, O2 = exposure stations (perspective centers of the imagery)
Nadir and Off-Nadir
• The scanner can produce a nadir view. Nadir is the point
directly below the camera. SPOT has off-nadir viewing
capability.
• Off-nadir refers to any point that is not directly beneath the
satellite but off at an angle (that is, east or west of the nadir),
as shown in the figure.
Tri-stereo Imagery

The Pleiades-1A and Pleiades-1B satellite sensors can be programmed to collect tri-stereo
imagery for the production of high-quality 1 m - 2 m DEMs for 3D urban and terrain modelling.
Tri-stereo acquisitions reveal elevations that would otherwise remain hidden in steep terrain
or in the urban canyons of dense built-up areas.

Advantage: a lower probability of occlusions, which are common in dense urban and forested
areas.
Image acquisition methodology for SPOT Satellite
 The satellites collect images by scanning along a line, called the scan line.

 For each line scanned by the sensors of the satellite there is a unique perspective center
and a unique set of rotation angles.

 The location of the perspective center relative to the scan line is constant for each line,
as the interior orientation parameters and focal length are constant for a given scan line.

 Since the motion of the satellite is smooth and linear over the entire length of the scene,
the perspective centers of all scan lines in a scene are assumed to lie along a smooth line.
Satellites
• SPOT (Satellite Pour l'Observation de la Terre / Satellite for Observation of the Earth)
• The SPOT satellite carries two High Resolution Visible (HRV) sensors, each of which is a
pushbroom scanner.
• The focal length of the camera optic is 1084 mm, which is very large relative to the
length of the camera (78 mm).
• The Instantaneous Field of View (IFOV) is 4.1 degrees.
• The satellite orbit is circular, north-south and south-north, about 830 km above the Earth,
and sun-synchronous.
• A sun-synchronous orbit is one whose orbital plane precesses at the same rate as the Earth
revolves around the Sun, so the satellite crosses the equator at the same local solar time
on every pass.
• Resolution of the images is 10 m panchromatic and 20 m multispectral.

• IRS-1C (Indian Remote Sensing)
• The IRS-1C satellite has a pushbroom sensor consisting of three individual CCDs.
• The ground resolution of the imagery ranges between 5 and 6 meters.
• The focal length of the optic is approximately 982 mm.
• The pixel size of the CCD is 7 microns.
• The images captured from the three CCDs are processed independently or merged into one
image and system-corrected to account for the systematic error associated with the sensor.
DATA PROCESSING
Modelling Satellite Sensor Orientation
Data processing techniques are selected based on the required final output and the accuracy
of the data products.

Defining the camera or sensor model involves establishing the geometry of the camera/sensor
as it existed at the time of image acquisition.

Modelling the satellite sensor's motion and orientation in space is one of the preliminary
tasks that must be performed before satellite image data can be used for any application.

Orientation of the images is the fundamental step, and its accuracy is a crucial issue in
the evaluation of the entire system.

General mathematical models for satellite sensor modelling are:
 Rigorous or physical sensor model
 Rational Function Model (RFM)
 Direct Linear Transformation (DLT)
 3D polynomial model
 3D affine model
Satellite scene coordinates
• A = origin of file coordinates
• A-Xf, A-Yf = file coordinate axes
• C = origin of image coordinates (centre of scene)
• C-x, C-y = image coordinate axes
• Scene size: 6000 rows x 6000 columns (pixels)
Orientation Process
1. Interior orientation

 Satellite sensors such as SPOT, IRS-1C, and other generic pushbroom sensors use a
separate perspective center for each scan line, so the process is referred to as internal
sensor modelling.

 It defines the internal geometry of a camera or sensor as it existed at the time of
capture.

 It expresses the angular relationships of the bundle of image rays, based on the
locations of the image points relative to the principal point.

 It is primarily used to transform the image pixel coordinate system, or another image
coordinate system, to the image space coordinate system.

 In a satellite image, the interior orientation parameters are:
 Principal point on the image
 Focal length of the camera
 Optics parameters
For each scan line, a separate bundle of light rays is defined, where:
Pk = image point on scan line k
xk = x value of the image coordinates for scan line k
f = focal length of the camera
Ok = perspective centre for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line k, bundled at perspective centre Ok
2. Exterior orientation

 Exterior orientation (EO) defines the position and angular orientation of the camera
when the image was captured.
 It represents a transformation from the ground coordinate system to the image
coordinate system.
 Nowadays most cameras are equipped with onboard GPS, and sometimes with an inertial
navigation system (INS), which collect the EO parameters directly on the platform.
 Exterior orientation parameters are:
 Perspective center of the center scan line
 Change of the perspective centers along the orbit
 Rotation of the center scan line: roll, pitch, and yaw
 Change of the angles along the orbit
Elements of Exterior Orientation (EO)

 The elements of exterior orientation define the characteristics associated with an
image at the time of exposure.
 They comprise position and attitude: an onboard GPS receiver determines the position,
while star trackers and gyros determine the camera attitude as a function of time.
 The positional elements Xo, Yo, and Zo define the position of the perspective centre (O)
with respect to the ground space coordinate system (X, Y, Z).
 The angular or rotational elements omega (ω), phi (ϕ), and kappa (κ) describe the
relationship between the ground space coordinate system (X, Y, Z) and the image space
coordinate system (x, y, z).
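The three rotational elements can be assembled into the standard photogrammetric rotation matrix M = R(κ)R(ϕ)R(ω) that carries ground-space directions into image space. A minimal numpy sketch (the angle values are illustrative, not from the slides):

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Ground-to-image rotation matrix M = R3(kappa) @ R2(phi) @ R1(omega).

    omega, phi, kappa: rotations (radians) about the X, Y, and Z axes,
    in the usual photogrammetric sequential convention.
    """
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    r_omega = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    r_phi = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    r_kappa = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return r_kappa @ r_phi @ r_omega

M = rotation_matrix(0.01, -0.02, 1.55)  # small roll/pitch, large yaw
# A rotation matrix is orthonormal: M @ M.T equals the identity
print(np.allclose(M @ M.T, np.eye(3)))  # True
```

For a pushbroom sensor one such matrix exists per scan line; the slides' "change of the angles along the orbit" corresponds to ω, ϕ, κ varying smoothly with line number.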
Orientation Angle
• The orientation angle of a satellite scene is the angle between a
perpendicular to the center scan line and the North direction.

Velocity vector
• The spatial motion of the satellite is described by the velocity vector. The real motion
of the satellite above the ground is further distorted by the Earth's rotation.
• The velocity vector of a satellite is the satellite's velocity measured as a vector
through a point on the spheroid.
• It provides a technique for representing the satellite's speed as if the imaged area
were flat instead of a curved surface.

The diagram depicts the relation between the orientation angle and the velocity vector of
a single scene:
O = orientation angle
C = center of the scene
V = velocity vector
Triangulation
 Satellite block triangulation provides a model for calculating the spatial relationship
between a satellite sensor and the ground coordinate system for each line of data.
 This relationship is expressed as the exterior orientation.
 In addition to fitting the bundle of light rays to the known points, satellite block
triangulation also accounts for the motion of the satellite.
 Once the exterior orientation of the center scan line is determined, the exterior
orientation of any other scan line is calculated from the distance of that scan line from
the center and the changes in perspective center location and rotation angles.
 Both GCPs and tie points can be used for satellite block triangulation of a stereo
scene.
 For triangulating a single scene, only GCPs are used; in this case, space resection
techniques are used to compute the exterior orientation parameters associated with the
satellite.
 A minimum of six GCPs is necessary, but 10 or more GCPs are recommended to obtain a
good triangulation result.
 The effects of the Earth's curvature are significant and are removed during the block
triangulation procedure.
(Figure: ideal point distribution over a satellite scene for triangulation.)
DATA Processing Models

 A sensor model is required to build the relationship between the three-dimensional (3D)
object space and the two-dimensional (2D) image space of high-resolution satellite imagery (HRSI).

Parametric or Rigorous Sensor Model (RSM)
 Physical camera model: precise
 Models the actual formation of the scene at the time of photography
 Exact delineation of the geometry (object space to image space)
 Complex; not all required elements are available

Non-Parametric or General Sensor Model (GSM)
 Transformation between image and object space through general functions
 Does not model the physical imaging process
 Also called approximate, replacement, or general models
 Commonly used models are:
• Rational Function Model (RFM)
• Direct Linear Transformation (DLT)
• 2D and 3D affine transformation models
Physical Sensor Model (rigorous model)

 The physical sensor model aims to describe the relationship between image and ground
coordinates according to the physical properties of the image acquisition.
 It can be formulated using the collinearity equations, which describe the relationship
between a point on the ground and its corresponding location on the image.
 For linear array sensors, the collinearity equations must be written for every scanned
line of the image.
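The collinearity condition above can be sketched per scan line. A minimal sketch, assuming the exterior orientation is already expressed as a perspective-center position X0 and a ground-to-image rotation matrix M (the numbers used below are illustrative):

```python
import numpy as np

def collinearity_project(ground_pt, X0, M, f):
    """Project a ground point into image-plane coordinates via collinearity.

    x = -f * d[0] / d[2] and y = -f * d[1] / d[2], where d = M @ (X - X0):
    the ray from the perspective center through the ground point,
    intersected with the image plane at focal length f.
    """
    d = M @ (np.asarray(ground_pt, float) - np.asarray(X0, float))
    return -f * d[0] / d[2], -f * d[1] / d[2]

# Perspective center 830 km above the scene, nadir-looking (M = identity),
# focal length 1.084 m (SPOT-like); ground point offset 100 m / 200 m
x, y = collinearity_project([100.0, 200.0, 0.0],
                            [0.0, 0.0, 830000.0],
                            np.eye(3), f=1.084)
```

For a pushbroom image, X0 and M would be evaluated separately for each scan line, which is exactly why the slides say the equations are written line by line.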

Rational Function Model (RFM)

 The Rational Function Model (RFM) is an empirical mathematical model developed to
approximate the relationship between the image and object spaces.
 A number of GCPs are normally used to improve the accuracy obtained by the RFM.
Direct Linear Transformation (DLT)
 The Direct Linear Transformation (DLT) is a method of determining the three-dimensional
location of an object in space using two views of the object.
 It maps each point of the 3D world to a point of the 2D image through a projection
operation.
 To locate the position of an unknown point in space using DLT, the system is first
calibrated using at least 6 GCPs; the positions of unknown points are then found using the
calibration matrix.
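The DLT calibration step can be written directly as a linear least-squares problem. A minimal sketch with synthetic data (the pinhole-camera numbers below are invented for illustration):

```python
import numpy as np

def dlt_calibrate(ground, image):
    """Solve the 11 DLT parameters L1..L11 from >= 6 GCPs.

    Each GCP (X, Y, Z) -> (u, v) contributes two linear equations:
      u = (L1*X + L2*Y + L3*Z + L4) / (L9*X + L10*Y + L11*Z + 1)
      v = (L5*X + L6*Y + L7*Z + L8) / (L9*X + L10*Y + L11*Z + 1)
    which become linear in L after multiplying through by the denominator.
    """
    rows, rhs = [], []
    for (X, Y, Z), (u, v) in zip(ground, image):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z])
        rhs += [u, v]
    L, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return L

def dlt_project(L, pt):
    """Map a ground point through calibrated DLT parameters."""
    X, Y, Z = pt
    den = L[8]*X + L[9]*Y + L[10]*Z + 1
    return ((L[0]*X + L[1]*Y + L[2]*Z + L[3]) / den,
            (L[4]*X + L[5]*Y + L[6]*Z + L[7]) / den)

# Synthetic check: image the 8 corners of a unit cube with a simple
# pinhole camera, then recover the DLT parameters from those 8 "GCPs".
gcps = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
obs = [(1000*X/(Z+10) + 500, 1000*Y/(Z+10) + 500) for X, Y, Z in gcps]
L = dlt_calibrate(gcps, obs)
```

Note the GCPs must not be coplanar, otherwise the 11-parameter system is degenerate; the cube corners above satisfy that.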

3D polynomial model
 The 3D polynomial model is used to model the relationship between the image and object
spaces.
 The choice of polynomial order depends on the type of terrain, the available number of
GCPs, and the stability of the satellite sensor in space.

3D affine model
 The 3D affine model is obtained by limiting the polynomial model to the first order.
 It represents the relationship between the image and object spaces with high fidelity,
especially when applied to data obtained from highly stable satellite sensors.
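Limiting the polynomial to first order, the 3D affine model can be fitted from GCPs by plain least squares. A minimal sketch with synthetic data (the coefficient values below are invented):

```python
import numpy as np

def fit_3d_affine(ground, image):
    """Fit line = a0 + a1*X + a2*Y + a3*Z and sample = b0 + b1*X + b2*Y + b3*Z.

    ground: (n, 3) GCP object coordinates; image: (n, 2) measured
    line/sample coordinates. Needs at least 4 well-distributed GCPs.
    Returns a (4, 2) coefficient matrix, one column per image coordinate.
    """
    ground = np.asarray(ground, float)
    A = np.hstack([np.ones((len(ground), 1)), ground])  # [1, X, Y, Z]
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(image, float), rcond=None)
    return coeffs

# Synthetic check: generate image coordinates from known coefficients,
# then confirm the fit recovers them exactly.
rng = np.random.default_rng(0)
gcps = rng.uniform(0, 100, size=(10, 3))
true = np.array([[5.0, 2.0], [0.1, 0.0], [0.0, 0.1], [0.02, -0.03]])
img = np.hstack([np.ones((10, 1)), gcps]) @ true
coeffs = fit_3d_affine(gcps, img)
print(np.allclose(coeffs, true))  # True
```

With real imagery the residuals of this fit are a quick indicator of whether a first-order model is adequate for the terrain and sensor in question.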
Rational Polynomial Coefficient (RPC) Model

 Rational Polynomial Coefficients (RPCs) provide a compact representation of the
ground-to-image geometry, allowing photogrammetric processing without requiring a physical
camera model.

 The RPC model forms the coordinates of an image point as ratios of cubic polynomials in
the coordinates of the world (object space) ground point.

 A set of images is given to determine the polynomial coefficients of the RPC model so as
to minimize the error.

 An RPC model is the ratio of two polynomials, derived from the rigorous sensor model and
the corresponding terrain information; it does not reveal the sensor parameters.

 High resolution satellite image vendors provide an RPC file with the image. This file
consists of the RPC coefficients, which relate coordinates in the sensor plane (2D) to
object coordinates (3D):

 l = Num_L(x, y, z) / Den_L(x, y, z)
 s = Num_S(x, y, z) / Den_S(x, y, z)

 where l and s are the normalized line and sample (row and column of the 2D image in the
sensor plane) and x, y, z are the normalized latitude, longitude, and height.
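The ratio-of-cubic-polynomials form can be sketched directly. A minimal sketch, assuming the 20-term cubic monomial basis in the common RPC00B-style ordering (vendor files differ in exact term ordering and normalization, so treat this as illustrative):

```python
import numpy as np

def rpc_terms(x, y, z):
    """The 20 cubic monomials of an RPC polynomial (RPC00B-style order).

    x, y, z are the normalized ground coordinates (latitude, longitude,
    height offset/scale-normalized to roughly [-1, 1]).
    """
    return np.array([1, y, x, z, y*x, y*z, x*z, y*y, x*x, z*z,
                     x*y*z, y**3, y*x*x, y*z*z, x*y*y, x**3, x*z*z,
                     y*y*z, x*x*z, z**3], dtype=float)

def rpc_eval(num, den, x, y, z):
    """One normalized image coordinate (line or sample) as Num/Den."""
    t = rpc_terms(x, y, z)
    return float(np.dot(num, t) / np.dot(den, t))

# Toy coefficients: line = (0.5 + 0.1*z) / 1, purely illustrative
num = np.zeros(20); num[0], num[3] = 0.5, 0.1
den = np.zeros(20); den[0] = 1.0
print(rpc_eval(num, den, 0.2, -0.3, 1.0))  # 0.6
```

A full RPC file carries four such 20-coefficient sets (line and sample numerators and denominators) plus the ten offset/scale values used for normalization.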
The polynomial coefficients are called Rational Polynomial Coefficients (RPCs).

Advantages of the RPC model:
• Computationally less intensive
• RPCs allow the sensor and camera model data to remain confidential
• A general, sensor-independent model
DATA Presentation
Digital Elevation Model
• A digital representation of the elevations in a region is commonly referred to as a
digital elevation model (DEM).
• When the elevations refer to the earth's terrain, it is appropriately referred to as a
digital terrain model (DTM).
• When considering elevations of surfaces at or above the terrain (tree crowns, rooftops,
etc.), it can be referred to as a digital surface model (DSM).
The procedure for DEM generation from stereoscopic views can be summarized as follows
(Shin et al., 2003):
1. Feature selection in one of the scenes of a stereo pair: selected features should
correspond to an interesting phenomenon in the scene and/or the object space.
2. Identification of the conjugate feature in the other scene: this is known as the
matching/correspondence problem within the photogrammetric and computer vision communities.
3. Intersection procedure: matched points in the stereo scenes undergo an intersection
procedure to produce the ground coordinates of the corresponding object points. The
intersection process involves the mathematical model relating the scene and ground
coordinates.
4. Point densification: high-density elevation data is generated within the area under
consideration through interpolation between the points derived in the previous step.

Common uses of elevation models include:
• Extracting terrain parameters
• Volume calculations
• Modelling water flow or mass movement (for example, landslides)
• Creation of relief maps
• Rendering of 3D visualizations
• Creation of physical models (including raised-relief maps)
• Orthorectification
• Reduction (terrain correction) of gravity measurements
• Terrain analysis in geomorphology and physical geography
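Step 4 (point densification) can be sketched with a simple inverse-distance-weighted interpolator; production systems typically use TIN- or kriging-based schemes, so this is only an illustration:

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted elevations at query points.

    xy_known: (n, 2) planimetric positions of the intersected points;
    z_known: (n,) their elevations; xy_query: (m, 2) positions of the
    dense DEM grid posts to fill in.
    """
    xy_known = np.asarray(xy_known, float)
    xy_query = np.asarray(xy_query, float)
    # (m, n) matrix of distances from every query point to every known point
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-12) ** power   # huge weight at exact hits
    w /= w.sum(axis=1, keepdims=True)
    return w @ np.asarray(z_known, float)

pts = [(0.0, 0.0), (10.0, 0.0)]
z = [100.0, 120.0]
# Midpoint gets the mean elevation; a known point returns its own value
print(idw_interpolate(pts, z, [(5.0, 0.0), (0.0, 0.0)]))  # [110. 100.]
```

Evaluating this over a regular grid of query points yields the dense raster DEM described in the slide.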
Orthorectification
An ortho-rectified image is generally defined as an image that has been geometrically
corrected for displacements caused by terrain relief.

General sources of geometric error:
• camera and sensor orientation
• systematic error of the camera/sensor
• topographic relief displacement
• Earth curvature

Least squares adjustment during block triangulation minimizes the errors associated with
camera or sensor instability.

In an ortho-rectified image, the projecting rays are perpendicular to the plane of
projection; hence, any part of the object that is parallel to the plane of projection
appears in its proper shape and at correct scale.

Orthorectification transforms the DEM to 2-D; by overlaying a contour layer and a features
layer on it, we can interpret it as a map.

(Figure: raw image vs. orthorectified image.)
Orthorectification
• The effects of topographic relief displacement are accounted for by utilizing a DEM during the
orthorectification procedure.
• The orthorectification process takes the raw digital imagery and applies a DEM and
triangulation results to create an orthorectified image.
• Once an orthorectified image is created, each pixel within the image possesses geometric
fidelity.
• Measurements taken off an orthorectified image represent the corresponding measurements
as if they were taken on the Earth's surface.
• The resulting orthorectified image is known as a digital orthoimage.
Why satellite images are processed differently from aerial photographs?

• In a satellite image there is a unique set of orientation parameters (perspective
center, rotation angles) for every scan line, so the orientation varies continuously along
the image, whereas in an aerial photograph a single set of orientation parameters holds
for the whole scene.
• Satellite images cover a large area compared to aerial photographs, so 3rd-order
polynomial transformations are applied to orient two scenes, compared to 1st-order
polynomial transformations for aerial photographs.
• So we require a different system to process satellite images.
Advantages/ Disadvantages
Advantages of using a satellite as the platform
• High altitude with attendant wide coverage
• Freedom from the aerodynamic motion that affects heavier-than-air aircraft
• Weightlessness, which permits large, rigid orbiting cameras to be constructed with less mass than would be
required in a conventional aircraft
• The process of photographing the land surface is continuous and lasting, so the most appropriate image
can be chosen.
• The formalities of aerial photography and flight arrangement are avoided.
• The use of satellite images is considerably less expensive than aerial pictures.
• The opportunity to use cameras that can be unfolded or extended to large sizes with long focal
lengths
• The opportunity to photograph areas of the Earth that are accessible only with difficulty by conventional aircraft

Disadvantages of using a satellite as the platform
• Necessity of operating the camera in the space environment (e.g. vacuum, temperature, radiation, micrometeorite
hazards)
• Problems of image motion compensation because of the high speed of the satellite
• Difficulty of recovering photographic film from the satellite, or the necessity to telemeter the photographic
information to the ground
• Inertial disturbances of the orientation and stability of the camera platform caused by non-compensated
motions of mechanical parts in the camera-satellite system
References

1. Zhang Guo, Yuan Xiuxiao, "On RPC Model of Satellite Imagery", Geo-spatial Information,
vol. 9, issue 4.

2. Albertz, Jörg, "Mapping from Space - Cartographic Applications of Satellite Image Data",
GeoJournal 32.1, pp. 29-37.

3. Rosenberg, Paul, "Earth Satellite Photogrammetry", Mount Vernon, N.Y.

4. Nandakumar, R., "A Collection of Lecture Notes on Satellite Photogrammetry".

5. https://www.satimagingcorp.com/satellite-sensors/
THANK YOU
