Satellite Photogrammetry



By
Vishal Mishra
Enroll. No. 15520016

WHAT?

Introduction
Photogrammetry is classified on the basis of sensor platform:
Terrestrial or Close Range
Aerial
Satellite or Space
If the sensing system is spaceborne, it is called space photogrammetry, satellite photogrammetry, or extraterrestrial photogrammetry.

Introduction
Satellite photogrammetry has slight variations compared to photogrammetric applications associated with aerial frame cameras.
The images are taken with high-resolution CCD cameras coupled with large lenses, which picture the ground directly below the satellite as it passes over.
The amount of information handled is very large, as the satellites image very large scenes.

Introduction
These satellites are capable of obtaining and relaying very large volumes of imagery data.
The satellite data used for photogrammetric purposes generally comes from sun-synchronous satellites.

How is it different from Remote Sensing?
Twin branches:
Space photogrammetry is metric in nature, whereas remote sensing is thematic in nature.
Interpretative photogrammetry forms the basis of remote sensing.

Examples
Extraterrestrial image
Ikonos image

WHY?

Advantages: Satellite Images
Imaging of the land surface is continuous, with a revisit period of about 4 days (in the case of QuickBird), so the most appropriate image can be chosen.
The formalities for aerial photography and flight arrangement are avoided.
The use of satellite images is considerably less expensive than commissioning aerial photography.

Advantages: Satellite Platform
High altitude with attendant wide coverage
Freedom from the aerodynamic motion that attends heavier-than-air aircraft
Weightlessness, which permits large, rigid orbiting cameras to be constructed with less mass than would be required in a conventional aircraft

Advantages: Satellite Platform
The resulting opportunity to use cameras which can be unfolded or extended to large sizes with long focal lengths
The opportunity to photograph areas of the Earth that are accessible only with difficulty by conventional aircraft

Disadvantages: Satellite Platform
Necessity of operating the camera in the space environment (e.g. vacuum, temperature, radiation, micrometeorite hazards)
Difficulty of recovering photographic film from the satellite, or the necessity to telemeter the photographic information to the ground
Problems of image motion compensation because of the high speed of the satellite
Inertial disturbances of the orientation and stability of the camera platform caused by non-compensated motions of mechanical parts in the camera-satellite system

HOW?

General Workflow

DATA ACQUISITION

Sensor Types

A push broom scanner (along-track scanner) is a technology for obtaining images with spectroscopic sensors.
It is regularly used for passive remote sensing from space and in spectral analysis on production lines, for example with near-infrared spectroscopy.
A whisk broom or spotlight sensor (across-track scanner) is a technology for obtaining satellite images with optical cameras.
In a whisk broom sensor, a mirror scans across the satellite's path (ground track), reflecting light into a single detector which collects data one pixel at a time.

Sensor Types
The advantage of along-track stereo images compared with images taken from adjacent orbits (across-track) is that they are acquired under almost the same ground and atmospheric conditions.

Satellites
The SPOT satellite carries two high resolution visible (HRV) sensors, each of which is a pushbroom scanner.
The focal length of the camera optics is 1084 mm; the length of the camera is 78 mm.
The Instantaneous Field of View (IFOV) is 4.1 degrees.
The satellite orbit is circular, north-south and south-north, about 830 km above the Earth, and sun-synchronous.
A sun-synchronous orbit is one whose orbital plane precesses at the same rate as the Earth's revolution around the Sun, so the satellite crosses a given latitude at the same local solar time.
Resolution of the images is 10 m.
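These numbers can be cross-checked: the ground sample distance follows from altitude, detector element size, and focal length (the 13-micron CCD element size is stated on the later Satellite Scene slide). A minimal sketch:

```python
# Ground sample distance from sensor geometry: GSD = H * p / f
# (orbit altitude x detector element size / focal length).
H = 830_000.0  # orbit altitude in metres (about 830 km, from the slide)
p = 13e-6      # CCD element size in metres (13 x 13 microns)
f = 1.084      # focal length in metres (1084 mm)

gsd = H * p / f
print(f"GSD = {gsd:.1f} m")  # about 10 m, matching the stated resolution
```

The agreement with the stated 10 m resolution suggests the slide's focal length and altitude figures are internally consistent.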

Satellites
The IRS-1C satellite has a pushbroom sensor consisting of three individual CCDs.
The ground resolution of the imagery ranges from 5 to 6 meters.
The focal length of the optics is approximately 982 mm.
The pixel size of the CCD is 7 microns.
The images captured from the three CCDs are processed independently or merged into one image and system-corrected to account for the systematic error associated with the sensor.

Image acquisition methodology
The satellites collect the images by scanning along a line, called the scan line.
For each line scanned by the sensors of the satellite there is a unique perspective center and a unique set of rotation angles.
The location of the perspective center relative to the scan line is constant for each line, as the interior orientation parameters and focal length are constant for a given scan line.
Since the motion of the satellite is smooth and linear over the entire length of the scene, the perspective centers of all scan lines in a scene are assumed to lie along a smooth line.

Rotation angles

Perspective Centre

Satellite Scene
The satellite exposure station is defined as the perspective center in ground coordinates for the center scan line.
The image captured by the satellite is called a scene.
A SPOT Pan 1A scene is composed of 6000 lines, each consisting of 6000 pixels.
Each line is exposed for 1.5 milliseconds, so it takes 9 seconds to scan the entire scene.
A single pixel in the image records the light detected by one of the 6000 light-sensitive elements in the camera.
The physical dimension of a single CCD element is 13 x 13 microns.
Each pixel is defined by file coordinates: a column number and a row number.
The center of the scene is the center pixel of the center scan line; this center is the origin of the image coordinate system.
The following figure depicts them:

Satellite Scene
A = origin of file coordinates
A-XF, A-YF = file coordinate axes
C = origin of image coordinates (center of scene)
C-x, C-y = image coordinate axes
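The file-to-image relation the legend describes (origin shifted from the corner A to the scene centre C, with the y axis flipped) is a constant transformation. A small illustrative sketch, with the 6000 x 6000 SPOT scene size assumed as the default:

```python
def file_to_image(col, row, n_cols=6000, n_rows=6000):
    """Map file coordinates (origin A at the corner, row increasing
    downward) to image coordinates (origin C at the scene centre,
    y increasing upward)."""
    cx = (n_cols - 1) / 2.0
    cy = (n_rows - 1) / 2.0
    return col - cx, cy - row

# The centre of the scene becomes the image-coordinate origin:
print(file_to_image(2999.5, 2999.5))  # (0.0, 0.0)
```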

Satellite Data
The header of the data file of a SPOT scene contains ephemeris data, which provides information about the recording of the data and the satellite orbit.
The data provided are:
Position of the satellite in geocentric coordinates (with the origin at the center of the Earth) to the nearest second
Velocity vector of the camera
Rotational velocity of the camera
Attitude changes of the camera
Exact time of exposure of the center scan line of the scene
The data obtained are converted to the local ground system for the triangulation.

Orientation Angle and Velocity Vector
Orientation Angle
The orientation angle of a satellite scene is the angle between a perpendicular to the center scan line and the North direction.
Velocity Vector
The spatial motion of the satellite is described by the velocity vector. The real motion of the satellite above the ground is further distorted by the Earth's rotation.
The velocity vector of a satellite is the satellite's velocity measured as a vector through a point on the spheroid.
It provides a technique to represent the satellite's speed as if the imaged area were flat instead of a curved surface.

Orientation Angle and Velocity Vector
The adjacent diagram depicts the relation between the orientation angle and the velocity vector of a single scene.
O = orientation angle
C = center of the scene
V = velocity vector

Satellite topographic mapping
Stereo satellite images are captured either consecutively by a single satellite along the same orbit within a few seconds (along-track imaging technique), or by the same satellite (or different satellites) from different orbits on different dates (across-track imaging technique).
The base-to-height (B/H) ratio should be close to 1 for a high-quality stereo model with high elevation accuracy.
Satellites: Cartosat-1, CHRIS/PROBA, EROS-A, IRS, IKONOS, MOMS-02, SPOT, and Terra ASTER.
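The B/H rule of thumb can be made concrete: the elevation error of a stereo model scales with (H/B) times the parallax measurement accuracy. A hedged sketch (the 0.5-pixel matching accuracy is an illustrative assumption, not from the slides):

```python
def height_error(b_over_h, gsd_m, matching_acc_px=0.5):
    """Approximate elevation error: sigma_h = (H/B) * sigma_p,
    with the parallax accuracy sigma_p expressed in ground units
    as a fraction of the ground sample distance."""
    return (1.0 / b_over_h) * matching_acc_px * gsd_m

# With a 10 m GSD, doubling B/H from 0.5 to 1.0 halves the error:
print(height_error(0.5, 10.0))  # 10.0 m
print(height_error(1.0, 10.0))  # 5.0 m
```

This is why a B/H close to 1 is preferred: a short base makes the intersection geometry shallow and inflates the height error.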

Satellite topographic mapping
Stereo data can be collected on the same orbit or on different orbits (beware of changes between acquisitions).
The satellite may have to be rotated to point the sensor correctly.
The optimum base-to-height ratio is 0.6 to 1.0.
Atmospheric effects (refraction, optical thickness) become more significant at higher look angles.

Satellite topographic mapping
Light rays in a bundle defined by the SPOT sensor are almost parallel, lessening the importance of the satellite's position; the inclination angles (incidence angles) of the cameras onboard the satellite become the critical data.
Inclination is the angle between a vertical on the ground at the center of the scene and a light ray from the exposure station. This angle defines the degree of off-nadir viewing when the scene was recorded.
The cameras can be tilted in increments from a minimum of 0.6 to a maximum of 27 degrees to the east (negative inclination) or west (positive inclination).
A stereo scene is achieved when two images of the same area are acquired on different days from different orbits, one taken east of the other. For this to occur, there must be significant differences in the inclination angles.
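For opposite-side viewing the base-to-height ratio follows directly from the two inclination angles, B/H = tan i1 + tan i2; at the maximum tilt of 27 degrees on each side this lands close to the B/H of about 1 recommended earlier. A small sketch:

```python
import math

def base_to_height(i1_deg, i2_deg):
    """B/H ratio of an across-track stereo pair viewed from opposite
    sides of the target: B/H = tan(i1) + tan(i2)."""
    return math.tan(math.radians(i1_deg)) + math.tan(math.radians(i2_deg))

# Maximum eastward and westward tilts of 27 degrees:
print(round(base_to_height(27.0, 27.0), 2))  # 1.02
```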

Inclination Angle of a Stereoscene
C = center of the scene
I- = eastward inclination
I+ = westward inclination
O1, O2 = exposure stations (perspective centers of imagery)

Nadir and Off-Nadir
The scanner can produce a nadir view. Nadir is the point directly below the camera. SPOT has off-nadir viewing capability.
Off-nadir refers to any point that is not directly beneath the satellite but is off at an angle (that is, east or west of the nadir), as shown in the figure.

Tri-stereo Imagery
The Pleiades-1A and Pleiades-1B satellite sensors can be programmed to collect tri-stereo imagery for the production of high-quality 1 m-2 m DEMs for 3D urban and terrain modeling. The tri-stereo acquisitions reveal elevation that would otherwise remain hidden in steep terrain or urban canyons in dense built-up areas.

DATA PROCESSING

MODELLING SATELLITE SENSOR ORIENTATION
Defining the camera or sensor model involves establishing the geometry of the camera/sensor as it existed at the time of image acquisition.
Modelling satellite sensor motion and orientation in space is one of the preliminary tasks that should be performed before using satellite image data for any application.
The orientation of the images is a fundamental step, and its accuracy is a crucial issue during the evaluation of the entire system.
For pushbroom sensors, triangulation and photogrammetric point determination are rather different compared to standard approaches, and require special investigation of the sensor geometry and the acquisition mode.
For geo-referencing of imagery acquired by pushbroom sensors, geometric models have been developed.
General mathematical models used for satellite sensor modelling are:
Rigorous or physical sensor model
Rational Function Model (RFM)
Direct Linear Transformation (DLT)
3D polynomial model

MODELLING SATELLITE SENSOR ORIENTATION
The physical sensor model aims to describe the relationship between image and ground coordinates according to the physical properties of the image acquisition.
The physical sensor model (rigorous model) can be formulated using the collinearity equations, which describe the relationship between a point on the ground and its corresponding location on the image.
With linear array sensors (as in the case of the IKONOS, QuickBird, IRS, and SPOT satellites), the collinearity equations should be written for every scanned line of the image.
The Rational Function Model (RFM) is an empirical mathematical model that has been developed to approximate the relationship between the image and object spaces.
A number of GCPs are normally used to improve the accuracy obtained by the RFM.
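A common per-line formulation of the collinearity condition reads as follows; the rotation matrix elements r_ij and the per-line exterior orientation (X_0^k, Y_0^k, Z_0^k) are written in a standard textbook notation, not taken from the slides. The along-track image coordinate is zero because each scan line is a one-dimensional exposure:

```latex
x_k = -f\,\frac{r_{11}(X - X_0^k) + r_{12}(Y - Y_0^k) + r_{13}(Z - Z_0^k)}
               {r_{31}(X - X_0^k) + r_{32}(Y - Y_0^k) + r_{33}(Z - Z_0^k)}
\qquad
0 = -f\,\frac{r_{21}(X - X_0^k) + r_{22}(Y - Y_0^k) + r_{23}(Z - Z_0^k)}
             {r_{31}(X - X_0^k) + r_{32}(Y - Y_0^k) + r_{33}(Z - Z_0^k)}
```

Here k indexes the scan line, so each line carries its own exterior orientation, which is why the rigorous model must be written per line.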

MODELLING SATELLITE SENSOR ORIENTATION
The 3D polynomial model can also be used to model the relationship between the image and object spaces.
Results show that the choice of the polynomial order depends on the type of terrain, the available number of GCPs, and the stability of the satellite sensor in space.
The 3D affine model is obtained by limiting the polynomial model to the first order.
The 3D affine model represents the relationship between the image and object spaces with high integrity, especially when applied to data obtained from highly stable satellite sensors.
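The first-order (3D affine) model has eight coefficients per image and can be estimated from four or more GCPs by least squares. A minimal sketch with NumPy (the function name is illustrative):

```python
import numpy as np

def fit_3d_affine(img_xy, gnd_xyz):
    """Least-squares fit of the 3D affine model
        x = a0 + a1*X + a2*Y + a3*Z
        y = b0 + b1*X + b2*Y + b3*Z
    from n >= 4 ground control points."""
    gnd_xyz = np.asarray(gnd_xyz, dtype=float)
    A = np.column_stack([np.ones(len(gnd_xyz)), gnd_xyz])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(img_xy, dtype=float), rcond=None)
    return coeffs  # shape (4, 2): column 0 is a0..a3, column 1 is b0..b3
```

With exact synthetic GCPs the fit recovers the generating coefficients; with real GCPs the residuals indicate how well the affine approximation suits the sensor.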

Comparison of models

MODELLING SATELLITE SENSOR ORIENTATION
Rigorous modeling is the most accurate of all because it takes into consideration the actual physical process of image capture.
It requires both inner orientation and exterior orientation parameters.
Inner orientation parameters are generally available through a calibration process.

Interior Orientation
Interior orientation refers to the calibration of the sensor elements and of the system behind the image plane.
Satellite sensors such as SPOT, IRS-1C, and other generic pushbroom sensors use a perspective center for each scan line; the process is referred to as internal sensor modeling.
In a satellite image the interior orientation parameters are:
1. Principal point on the image
2. Focal length of the camera
3. Optics parameters
The transformation between file coordinates and image coordinates is constant.

Interior Orientation
For each scan line, a separate bundle of light rays is defined, where:
Pk = image point
xk = x value of image coordinates for scan line k
f = focal length of the camera
Ok = perspective center for scan line k, aligned along the orbit
PPk = principal point for scan line k
lk = light rays for scan line k, bundled at perspective center Ok

Exterior Orientation
The exterior orientation describes the location and orientation of the bundle of rays in the object coordinate system with six parameters: the projection center coordinates (X0, Y0, Z0) and the rotations around the three axes (roll, pitch, and yaw).
Exterior orientation comprises position and attitude.
On-board GPS receivers determine the satellite ephemeris, i.e. the camera position as a function of time.
Star trackers and gyros determine the camera attitude as a function of time.

Exterior Orientation
Exterior orientation parameters are:
1. Perspective center of the center scan line
2. Change of the perspective centers along the orbit
3. Rotation of the center scan line: roll, pitch, and yaw
4. Change of the angles along the orbit

Triangulation
Satellite block triangulation provides a model for calculating the spatial relationship between a satellite sensor and the ground coordinate system for each line of data.
This relationship is expressed as the exterior orientation.
In addition to fitting the bundle of light rays to the known points, satellite block triangulation also accounts for the motion of the satellite.
Once the exterior orientation of the center scan line is determined, the exterior orientation of any other scan line is calculated based on the distance of that scan line from the center and the changes of the perspective center location and rotation angles.

Triangulation
Modified collinearity equations are used to compute the exterior orientation parameters associated with the respective scan lines in the satellite scenes.
Each scan line has a unique perspective center and individual rotation angles. When the satellite moves from one scan line to the next, these parameters change. Due to the smooth motion of the satellite in orbit, the changes are small and can be modeled by low-order polynomial functions.
Both GCPs and tie points can be used for satellite block triangulation of a stereo scene.
For triangulating a single scene, only GCPs are used. In this case, space resection techniques are used to compute the exterior orientation parameters associated with the satellite.
A minimum of six GCPs is necessary; 10 or more GCPs are recommended to obtain a good triangulation result.
The effects of the Earth's curvature are significant and are removed during the block triangulation procedure.
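The low-order polynomial modeling of the per-line changes can be sketched as follows; the quadratic form and the drift coefficients are illustrative assumptions, not values from the slides:

```python
def eo_parameter(line, center_value, drift_coeffs, center_line=3000):
    """Exterior orientation parameter for a given scan line, modeled as
    a low-order polynomial in the distance d from the center scan line:
        p(d) = p_center + c1*d + c2*d**2 + ...
    drift_coeffs holds (c1, c2, ...)."""
    d = line - center_line
    return center_value + sum(c * d ** (i + 1) for i, c in enumerate(drift_coeffs))

# At the center line the triangulated value is returned unchanged;
# away from it the small polynomial drift terms take effect:
print(eo_parameter(3000, 0.5, [1e-6, 1e-10]))  # 0.5
```

Only the center-line orientation and a handful of drift coefficients need to be estimated in the adjustment, rather than six parameters per scan line.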

Triangulation
Ideal point distribution over a satellite scene for triangulation

Orthorectification
Orthorectification is the process of reducing the geometric errors inherent in photography and imagery.
General sources of geometric error:
camera and sensor orientation
systematic error of the camera/sensor
topographic relief displacement
Earth curvature
Least squares adjustment techniques during block triangulation minimize the errors associated with camera or sensor instability.

Orthorectification
Additionally, the use of self-calibrating bundle adjustment (SCBA) techniques along with Additional Parameter (AP) modeling accounts for the systematic errors associated with the camera's interior geometry.
The effects of topographic relief displacement are accounted for by utilizing a DEM during the orthorectification procedure.

Orthorectification
The orthorectification process takes the raw digital imagery and applies a DEM and the triangulation results to create an orthorectified image.
Once an orthorectified image is created, each pixel within the image possesses geometric fidelity.
Measurements taken off an orthorectified image represent the corresponding measurements as if they were taken on the Earth's surface.
The resulting orthorectified image is known as a digital orthoimage.
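The per-pixel logic can be sketched as a nearest-neighbour resampling loop; `ground_to_image` stands in for the triangulated sensor model and is a placeholder, not a real API:

```python
def orthorectify(dem, image, ground_to_image, nodata=0):
    """For every DEM cell, project its ground position and elevation
    into the raw image with the sensor model and copy the nearest
    pixel value into the orthoimage."""
    rows, cols = len(dem), len(dem[0])
    ortho = [[nodata] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            u, v = ground_to_image(c, r, dem[r][c])  # sensor model
            ui, vi = round(u), round(v)
            if 0 <= vi < len(image) and 0 <= ui < len(image[0]):
                ortho[r][c] = image[vi][ui]
    return ortho

# With flat terrain and an identity sensor model the image is unchanged:
flat = [[0.0, 0.0], [0.0, 0.0]]
img = [[1, 2], [3, 4]]
print(orthorectify(flat, img, lambda x, y, z: (x, y)))  # [[1, 2], [3, 4]]
```

Production systems use bilinear or cubic resampling rather than nearest-neighbour, but the structure of the loop is the same.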

DATA REPRESENTATION

Digital Elevation Model
Orthophoto

Digital Elevation Model
A digital representation of the elevations in a region is commonly referred to as a digital elevation model (DEM).
When the elevations refer to the Earth's terrain, it is appropriately referred to as a digital terrain model (DTM).
When considering elevations of surfaces at or above the terrain (tree crowns, rooftops, etc.), it can be referred to as a digital surface model (DSM).

Digital Elevation Model
The procedure for DEM generation from stereoscopic views can be summarized as follows (Shin et al., 2003):
1. Feature selection in one of the scenes of a stereo-pair:
Selected features should correspond to an interesting
phenomenon in the scene and/or the object space.
2. Identification of the conjugate feature in the other scene: This
problem is known as the matching/correspondence problem
within the photogrammetric and computer vision
communities.
3. Intersection procedure: Matched points in the stereo-scenes
undergo an intersection procedure to produce the ground
coordinates of corresponding object points. The intersection
process involves the mathematical model relating the scene
and ground coordinates.
4. Point densification: High-density elevation data is generated within the area under consideration through interpolation between the points derived in the previous step.
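Step 2, the correspondence problem, is commonly solved by area-based matching; a minimal 1-D normalised cross-correlation sketch (illustrative only, not the method of Shin et al.):

```python
def ncc(a, b):
    """Normalised cross-correlation of two equal-length windows."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(template, search_line):
    """Slide the template along the conjugate scan line and return
    the offset with the highest correlation score."""
    w = len(template)
    offsets = range(len(search_line) - w + 1)
    return max(offsets, key=lambda i: ncc(template, search_line[i:i + w]))

print(best_match([1, 5, 2], [0, 0, 1, 5, 2, 0]))  # 2
```

The matched offsets are the parallaxes that the intersection step of the procedure converts into ground coordinates.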

Use of DEM
Common uses of Elevation Models include:
Extracting terrain parameters
Volume Calculations
Modelling water flow or mass movement (for example,
landslides)
Creation of relief maps
Rendering of 3D visualizations
Creation of physical models (including raised-relief maps)
Orthorectification
Reduction (terrain correction) of gravity measurements
Terrain analysis in geomorphology and physical
geography
