Spatial Analysis

This document discusses spatial analysis techniques for interpolation and overlay. It describes interpolation as predicting unknown values from a limited number of sample points to create continuous surfaces. Different interpolation methods make assumptions about the data and produce predictions using various calculations. Overlay analysis allows combining several input layers into a single output by applying weights. Suitability modeling is a common application that identifies optimal locations based on assessing multiple factors. The document outlines the general steps of defining the problem, breaking it into submodels, determining significant layers, reclassifying data, weighting layers, combining layers, and analyzing the results.

SPATIAL ANALYSIS

- INTERPOLATION -

INTERPOLATION
Interpolation predicts values for cells in a raster from a limited number of sample data points.
It can be used to predict unknown values for any geographic point data, such as elevation, rainfall, chemical concentrations, noise levels, and so on.
Surface interpolation creates a continuous (or prediction) surface from sampled point values.
The continuous surface representation of a raster dataset represents some measure, such as height, concentration, or magnitude (for example, elevation, acidity, or noise level). Surface interpolation makes predictions from sample measurements for all locations in an output raster dataset, whether or not a measurement has been taken at the location.

INTERPOLATION
There are a variety of ways to derive a prediction for each location; each method is referred to as a model.
Each model makes different assumptions about the data, and certain models are more applicable to specific data; for example, one model may account for local variation better than another.
Each model produces predictions using different calculations.

INTERPOLATION
The deterministic interpolation methods assign values to locations based on the surrounding measured values and on specified mathematical formulas that determine the smoothness of the resulting surface.
The deterministic methods include IDW (inverse distance weighting), Natural Neighbor, Trend, and Spline.
The geostatistical methods are based on statistical models that include autocorrelation (the statistical relationship among the measured points). Geostatistical techniques not only have the capability of producing a prediction surface but also provide some measure of the certainty or accuracy of the predictions.
Kriging is a geostatistical method of interpolation.
Topo to Raster and Topo to Raster by File use an interpolation method specifically designed for creating continuous surfaces from contour lines; the methods also have properties favorable for creating surfaces for hydrologic analysis.

INTERPOLATION METHODS
IDW (INVERSE DISTANCE WEIGHTED)
The IDW (Inverse Distance Weighted) tool uses a method of interpolation that estimates cell values by averaging the values of sample data points in the neighborhood of each processing cell.
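The IDW weighting idea can be sketched in a few lines of Python. This is a minimal illustration (weights proportional to 1/distance^power), not the ArcGIS implementation, which adds options such as search radii and barriers:

```python
import math

def idw(samples, query, power=2):
    """Inverse Distance Weighted estimate at a query point.
    samples: list of ((x, y), value) pairs; power: distance exponent."""
    num = den = 0.0
    for (x, y), value in samples:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0:
            return value          # query coincides with a sample point
        w = 1.0 / d ** power      # nearer samples get larger weights
        num += w * value
        den += w
    return num / den

# Four hypothetical rain gauges at the corners of a unit square; the
# centre is equidistant from all of them, so the estimate is the mean.
gauges = [((0, 0), 10.0), ((0, 1), 20.0), ((1, 0), 10.0), ((1, 1), 20.0)]
print(idw(gauges, (0.5, 0.5)))  # 15.0
```

Raising the power gives more influence to the nearest samples, producing a less smooth surface.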
NATURAL NEIGHBOR
Natural Neighbor interpolation finds the closest subset of input samples to a query point and applies weights to them based on proportionate areas to interpolate a value.
TREND
Trend is a global polynomial interpolation that fits a smooth surface defined by a mathematical function (a polynomial) to the input sample points.

INTERPOLATION METHODS
SPLINE
The Spline tool uses an interpolation method that estimates values using a mathematical function that minimizes overall surface curvature, resulting in a smooth surface that passes exactly through the input points.
SPLINE WITH BARRIERS
The Spline with Barriers tool uses a method similar to the technique used in the Spline tool, with the major difference being that this tool honors discontinuities encoded in both the input barriers and the input point data.

INTERPOLATION METHODS
KRIGING
Kriging is a geostatistical interpolation method based on statistical models that include autocorrelation (the statistical relationship among the measured points). In addition to producing a prediction surface, Kriging can provide a measure of the certainty or accuracy of the predictions.

TOPO TO RASTER
The Topo to Raster and Topo to Raster by File tools use an
interpolation technique specifically designed to create a surface
that more closely represents a natural drainage surface and
better preserves both ridgelines and stream networks from
input contour data.

INTERPOLATION ANALYSIS
WHY INTERPOLATE TO RASTER?
The assumption that makes interpolation a viable option is that spatially distributed objects are spatially correlated; things that are close together tend to have similar characteristics.
For instance, if it is raining on one side of the street, we can predict with a high level of confidence that it is raining on the other side of the street. You would be less certain if it was raining across town, and less confident still about the state of the weather in the next county.
The values of points close to sampled points are more likely to be similar than those that are farther apart. This is the basis of interpolation. A typical use for point interpolation is to create an elevation surface from a set of sample measurements.
Geostatistical Analyst also provides an extensive collection of interpolation methods.

EXAMPLE INTERPOLATION APPLICATIONS
Interpolating a rainfall surface

The input here is a point dataset of known rainfall-level values, shown by the illustration on
the left.
The illustration on the right shows a raster interpolated from these points.
The unknown values are predicted with a mathematical formula that uses the values of
nearby known points.

EXAMPLE INTERPOLATION APPLICATIONS
Interpolating an elevation surface
A typical use for point interpolation is to create an elevation surface from a set of
sample measurements.
In the following graphic, each symbol in the point layer represents a location where the
elevation has been measured.
By interpolating, the values for each cell between these input points will be predicted.

EXAMPLE INTERPOLATION APPLICATIONS
Interpolating a concentration surface
In the example below, the interpolation tools were used to study the
correlation of the ozone concentration on lung disease in California.
The image on the left shows the locations of the ozone monitoring
stations. The image on the right displays the interpolated surface,
providing predictions for each location in California. The surface was
derived using kriging.

SPATIAL ANALYSIS
- OVERLAY -

OVERLAY
Overlay analysis allows you to apply weights to several inputs and combine them into a single output. The most common application for the Overlay tools is suitability modeling.
Overlay analysis in Spatial Analyst is a group of methodologies applied in optimal site selection or suitability modeling.
Suitability models identify the best or most preferred locations for a specific phenomenon. Types of problems addressed by suitability analysis include:
Where to site a new housing development
Which sites are better for deer habitat
Where economic growth is most likely to occur
Which locations are most susceptible to mudslides

OVERLAY
Overlay analysis often requires the analysis of many different factors.
For instance, choosing the site for a new housing development means assessing such things as land cost, proximity to existing services, slope, and flood frequency.
This information exists in different rasters with different value scales: dollars, distances, degrees, and so on.

OVERLAY
The general steps to perform overlay analysis:
1. Define the problem.
2. Break the problem into submodels.
3. Determine significant layers.
4. Reclassify or transform the data within a layer.
5. Weight the input layers.
6. Add or combine the layers.
7. Analyze the results.
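Steps 4 to 6 can be sketched numerically. In this hypothetical example, the three input layers are tiny grids that have already been reclassified to a common 1-to-9 scale, and the layer names and weights (0.5, 0.25, 0.25) are invented for illustration:

```python
# Three hypothetical input layers, already reclassified to a common
# 1-9 scale (step 4). Grids are row-major lists of equal shape.
slope     = [[9, 7], [3, 1]]
road_dist = [[5, 8], [6, 2]]
land_cost = [[4, 9], [7, 5]]

# Step 5: weight the layers by importance (weights sum to 1.0).
layers = [(0.5, slope), (0.25, road_dist), (0.25, land_cost)]

# Step 6: combine with an additive weighted overlay.
rows, cols = 2, 2
suitability = [[sum(w * layer[r][c] for w, layer in layers)
                for c in range(cols)] for r in range(rows)]

print(suitability)  # [[6.75, 7.75], [4.75, 2.25]]
```

The cell with the highest combined score (here the top-right cell) is the most suitable location under these assumed weights.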

OVERLAY
DEFINE THE PROBLEM
The overall objective must be identified. All aspects of the remaining steps of the overlay modeling process must contribute to this overall objective.
The components relating to the objective must be defined; a clear definition of each component and how the components interact must be established.
In the problem definition, specific measures should be
established to identify the success of the outcome from the
model.
For example, when identifying the best location for a ski
resort, the overall goal may be to make money. All factors that
are identified in the model should help the ski area be
profitable.

OVERLAY
BREAK THE PROBLEM INTO SUBMODELS
Most overlay problems are complex, and it is recommended to break them
down into submodels for clarity, to organize the thoughts, and to more
effectively solve the overlay problem.
For example, a suitability model for identifying the best location for a ski
resort can be broken into a series of submodels that all help the ski area be
profitable. The first submodel can be a terrain submodel identifying locations
that have a wide variety of favorable terrain for skiers and snowboarders.
Making sure people can reach the ski area can be captured in an
accessibility submodel.
A cost submodel can identify the locations that would be optimal to build on.
Certain attributes or layers can be in multiple submodels. For example, steep
slopes might be favorable in the terrain submodel but detrimental for the
cost for building submodel.

OVERLAY
DETERMINE SIGNIFICANT LAYERS
The attributes or layers that affect each submodel need to be
identified
Each factor captures and describes a component of the
phenomena the submodel is defining.
Each factor contributes to the goals of the submodel, and
each submodel contributes to the overall goal of the overlay
model.
For certain factors, new layers may need to be derived. For example, if it is more desirable to be closer to a major road, a distance-to-roads layer can be created.

OVERLAY
RECLASSIFICATION/TRANSFORMATION
Different number systems cannot be directly combined
effectively
The four main numbering systems are:
Ratio: The ratio scale has a reference point, usually zero, and the numbers within the scale are comparable. For example, elevation values are ratio numbers, and an elevation of 50 meters is half as high as 100 meters.
Interval: The values in an interval scale are relative to one another; however, there is no common reference point. For example, a pH scale is of type interval, where the higher the value is above the neutral value of 7, the more alkaline it is, and the lower the value is below 7, the more acidic it is. However, the values are not fully comparable. For example, a pH of 2 is not twice as acidic as a pH of 4.

OVERLAY
RECLASSIFICATION/TRANSFORMATION
Ordinal: An ordinal scale establishes order, such as who came in first, second, and third in a race. Order is established, but the assigned order values cannot be directly compared. For example, the person who came in first was not necessarily twice as fast as the person who came in second.
Nominal: There is no relationship between the assigned values in the nominal scale. For example, land-use values, which are nominal values, cannot be compared to one another. A land use of 8 is probably not twice as much as a land use of 4.
Common scales can be predetermined, such as a 1 to 9 or a 1 to 10 scale, with the higher value being more favorable, or the scale can be a 0 to 1 scale, defining the possibility of belonging to a specific set.
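A common-scale transformation like the one described above can be sketched as a simple linear rescaling. The helper below is hypothetical, not an ArcGIS tool; real reclassification often uses class breaks or nonlinear transformation functions instead:

```python
def reclassify(value, vmin, vmax, out_min=1, out_max=9, invert=False):
    """Linearly rescale a ratio-scale measurement onto a common
    suitability scale (default 1-9). A minimal sketch only."""
    t = (value - vmin) / (vmax - vmin)   # position in input range, 0..1
    if invert:                           # e.g. larger distance = worse
        t = 1.0 - t
    return out_min + t * (out_max - out_min)

# Hypothetical distance to a major road, 0-4000 m; closer is better.
print(reclassify(0, 0, 4000, invert=True))     # 9.0 (best)
print(reclassify(4000, 0, 4000, invert=True))  # 1.0 (worst)
```

Once every layer is on the same scale, the values can be meaningfully weighted and added in the next steps.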

OVERLAY
WEIGHT THE INPUT LAYERS
Certain factors may be more important to the overall goal
than others. If this is the case, before the factors are
combined, the factors can be weighted based on their
importance.
For example, in the building submodel for siting the ski resort, the slope criterion may be twice as important to the cost of construction as the distance from a road. Before combining the two layers, the slope criterion should therefore be given twice the weight of distance to roads.

OVERLAY
ADD/COMBINE THE LAYERS
In overlay analysis, it is desirable to establish the relationship of all the input factors together to identify the desirable locations that meet the goals of the model.
For example, the input layers, once weighted appropriately,
can be added together in an additive weighted overlay
model. In this combination approach, it is assumed that the
more favorable the factors, the more desirable the location
will be.
Other combining approaches can be applied. For example, in
a fuzzy logic overlay analysis, the combination approaches
explore the possibility of membership of a location to multiple
sets.

OVERLAY
ANALYZE
The identified locations should be visited. We need to validate that what we think is there is actually there. Things could have changed since the data for the model was created.
For example, views may be one of the input criteria to the
model; the better the view, the more preferred the location
will be. From the input elevation data, the model identified
the locations with the best views; however, when one of the
favorable sites is visited, it is discovered that a building has
been constructed in front of the location, obstructing the
view.

OVERLAY ANALYSIS
APPROACHES
The three main overlay approaches available in
Spatial Analyst
1. Weighted Overlay
2. Weighted Sum
3. Fuzzy Overlay

OVERLAY ANALYSIS
APPROACHES
WEIGHTED OVERLAY
The Weighted Overlay tool scales the input data on a defined scale (the default being 1 to 9), weights the input rasters, and adds them together. The more favorable locations for each input criterion are reclassified to higher values, such as 9.
The weights assigned to the input rasters must equal 100
percent. The layers are multiplied by the appropriate
multiplier, and for each cell, the resulting values are added
together.
Weighted Overlay assumes that more favorable factors result
in the higher values in the output raster, therefore identifying
these locations as being the best.

OVERLAY ANALYSIS
APPROACHES
WEIGHTED OVERLAY
In the following example, a location for a new urban park is
being chosen. Three factors will be considered: land use,
population density, and distance to existing parks. The goal
is to find an area of suitable land use, such as vacant land, in
a neighborhood of high population density to provide green
space in crowded areas that are not already served by an
existing park.

OVERLAY ANALYSIS
APPROACHES
WEIGHTED OVERLAY

The input rasters to the weighted overlay are displayed in the image above. They are (from left to right) land use, population density, and distance to parks.

OVERLAY ANALYSIS
APPROACHES
WEIGHTED OVERLAY
The weighted overlay model is displayed in the image below.
Each value class in each input raster is assigned a new, reclassified value on an evaluation scale of 1 to 5, where 1 represents the lowest suitability and 5 the highest.

OVERLAY ANALYSIS
APPROACHES
WEIGHTED OVERLAY
Each of the three input rasters is then weighted. In this weighted overlay, land use has a 50 percent influence, population density a 15 percent influence, and distance from parks a 35 percent influence.
The most suitable areas are shown in red. Orange areas are next, followed by green. Blue and purple areas are least suitable, and white areas are restricted.
Modifying the suitability values or the influence percentages will produce different results.
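For a single cell, the combination amounts to simple arithmetic. The influence percentages (50, 15, 35) are the ones given above; the individual cell scores below are invented for illustration:

```python
# Reclassified suitability scores (1-5 scale) for one hypothetical cell.
land_use_score    = 5   # vacant land: highly suitable
pop_density_score = 4   # dense neighbourhood
park_dist_score   = 3   # moderately far from existing parks

# Influence percentages from the example: 50% + 15% + 35% = 100%.
score = (0.50 * land_use_score
         + 0.15 * pop_density_score
         + 0.35 * park_dist_score)
print(round(score, 2))
```

Note that the actual Weighted Overlay tool rounds the combined result back onto the integer evaluation scale rather than keeping the fractional value.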

OVERLAY ANALYSIS
APPROACHES
WEIGHTED SUM
The Weighted Sum tool provides the ability to weight and combine multiple inputs to create an integrated analysis. It is similar to Weighted Overlay in that multiple raster inputs, representing multiple factors, can be easily combined, incorporating weights or relative importance.
There are two major differences:
Weighted Sum does not rescale the reclassified values back to an evaluation scale.
Weighted Sum allows floating-point and integer values, whereas the Weighted Overlay tool only accepts integer rasters as inputs.

OVERLAY ANALYSIS
APPROACHES
FUZZY OVERLAY
Fuzzy Overlay analysis is based on set theory. Set theory is the mathematical discipline quantifying the membership relationship of phenomena to specific sets. In Fuzzy Overlay, a set generally corresponds to a class.
Fuzzy Overlay loosely follows the general overlay analysis steps discussed above but differs in the meaning of the reclassed values and in the results of combining the multiple criteria. The first three steps are the same: define the problem, break it into submodels, and determine significant layers. But in Fuzzy Overlay analysis, the input rasters are not weighted.
In the add-and-combine step of the general overlay analysis, Fuzzy Overlay differs from Weighted Overlay and Weighted Sum. The combining analysis step in Fuzzy Overlay quantifies each location's possibility of belonging to specified sets from the various input rasters.
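The combining step can be sketched with two common fuzzy operators, And (minimum membership) and Or (maximum membership). This is a generic set-theory illustration with invented membership values, not the full set of combination operators the Fuzzy Overlay tool offers:

```python
# Fuzzy memberships (0-1) of one location in two sets, e.g.
# "favorable slope" and "favorable soil" (values hypothetical).
memberships = [0.8, 0.4]

fuzzy_and = min(memberships)   # conservative: all criteria must hold
fuzzy_or  = max(memberships)   # optimistic: any criterion suffices

print(fuzzy_and, fuzzy_or)  # 0.4 0.8
```

Unlike an additive weighted overlay, the result is still a 0-to-1 possibility of set membership, not a summed suitability score.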
