2.1. INTRODUCTION
Spatial data are what drive a GIS. Every functionality that sets a GIS apart from other analytical environments is rooted in the spatially explicit nature of the data. Spatial data are often referred to as layers or coverages. We will use the term layers from this point on, since this is the recognized term used in ArcGIS. Layers represent, in a particular digital storage format, features on, above, or below the surface of the earth. Depending on the type of features they represent, and the purpose to which the data will be applied, layers will be one of two major types:
a) Vector data represent features as discrete points, lines, and polygons.
b) Raster data represent the landscape as a rectangular matrix of square cells.
Depending on the type of problem that needs to be solved, the type of maps that need to be
made, and the data source, either raster or vector, or a combination of the two can be used. Each
data model has strengths and weaknesses in terms of functionality and representation. As you get
more experience with GIS, you will be able to determine which data type to use for a particular
application.
Vector, e.g.:
• ArcInfo Coverages
• ArcGIS Shape Files
• CAD (AutoCAD DXF & DWG, or MicroStation DGN files)
• ASCII coordinate data
Raster, e.g.:
• ArcInfo Grids
• Images
• Digital Elevation Models (DEMs)
• generic raster datasets
In the hierarchical data structure model, each pointer establishes a parent-child relationship, where a parent can have more than one child but a child can have only one parent. There is no connection between elements at the same level. To locate a particular record, you have to start at the top of the tree with a parent record and trace down the tree to the child.
Advantages
• Easy to understand: the organization of the database parallels a family tree, which is quite easy to follow.
• Accessing or updating records is very fast since the relationships have been predefined.
Disadvantages
• Large index files have to be maintained, and certain attribute values are repeated many times, which leads to data redundancy and increased storage.
• The rigid structure of this model does not allow alteration of tables; therefore, to add a new relationship the entire database has to be redefined.
Advantages
• Many-to-many relationships are easily implemented in a network data model.
• Data access and flexibility are better in the network model than in the hierarchical model. An application can access an owner record and the member records within a set.
• It enforces data integrity, as a user must first define the owner record and then the member records.
• The model eliminates redundancy, but at the expense of more complicated relationships.
2.3.3. Relational Data Structure Model
• The relational data model was introduced by Codd in 1970. The relational database
relates or connects data in different files through the use of a common field.
• A flat file structure is used with a relational database model. In this arrangement, data is stored in different tables made up of rows and columns, as shown in Figure 2.3.
• The columns of a table are named by attributes. Each row in the table is called a tuple
and represents a basic fact.
• No two rows of the same table may have identical values in all columns.
Advantages
• The manager or administrator does not have to be aware of any data structure or data
pointer. One can easily add, update, delete or create records using simple logic.
Disadvantages
• A few search commands in a relational database require more time to process
compared with other database models.
• For example, let us try to represent the statement "Hawaii is an island that is a state of the USA" in GIS. In this case we are not concerned with the geographic location given by latitude and longitude, as in the conventional GIS model, so it is not appropriate to use layers. In an object-oriented model we are more concerned with spatial relationships, for example "is a" (the island is a land) and "part of" (the state is a part of the country).
• In addition, Hawaii (the state) contains Honolulu City and also lies in the Pacific Region. Figure 2.4 (a) shows the "is a" inheritance from the superclass land, while Figure 2.4 (b) shows the spatial relationships for the state object. A minimal object-oriented sketch of these relationships follows this list.
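The "is a" and "part of" relationships above map naturally onto object-oriented constructs. The following is a minimal sketch in Python; the class names and attributes (Land, Island, Country, State) are illustrative assumptions, not the schema of any particular GIS:

class Land:                        # superclass for any piece of land
    def __init__(self, name):
        self.name = name

class Island(Land):                # "is a": an island is a land
    pass

class Country(Land):
    def __init__(self, name):
        super().__init__(name)
        self.states = []

class State(Island):               # Hawaii: an island that is also a state
    def __init__(self, name, country, capital=None, region=None):
        super().__init__(name)
        self.country = country     # "part of": the state is a part of the country
        country.states.append(self)
        self.capital = capital
        self.region = region

usa = Country("USA")
hawaii = State("Hawaii", usa, capital="Honolulu City", region="Pacific Region")

print(isinstance(hawaii, Land))    # True  -> "is a" inheritance (Figure 2.4 (a))
print(hawaii.country.name)         # "USA" -> "part of" relationship (Figure 2.4 (b))
print(hawaii.capital, hawaii.region)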
An entity is a real-world item or concept that exists on its own. Entities are equivalent to
database tables in a relational database, with each row of the table representing an instance of that
entity.
An attribute of an entity is a particular property that describes the entity. A relationship is
the association that describes the interaction between entities. Cardinality, in the context of ERD,
is the number of instances of one entity that can, or must, be associated with each instance of
another entity. In general, there may be one-to-one, one-to-many, or many-to-many relationships.
For example, let us consider two real-world entities, an employee and his department. An
employee has attributes such as an employee number, name, department number, etc. Similarly,
department number and name can be defined as attributes of a department. A department can
interact with many employees, but an employee can belong to only one department, hence there
can be a one-to-many relationship, defined between department and employee.
In the actual database, the employee table will have the department number as a foreign key referencing the department table, to enforce the relationship.
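A minimal sketch of this one-to-many relationship and its foreign key, using Python's built-in sqlite3 module; the table and column names are illustrative, not drawn from any specific system:

import sqlite3

# Minimal sketch of the department/employee relationship described above.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("""CREATE TABLE department (
                   dept_no   INTEGER PRIMARY KEY,
                   dept_name TEXT)""")

cur.execute("""CREATE TABLE employee (
                   emp_no   INTEGER PRIMARY KEY,
                   emp_name TEXT,
                   dept_no  INTEGER REFERENCES department(dept_no))""")

cur.execute("INSERT INTO department VALUES (10, 'Surveying')")
cur.executemany("INSERT INTO employee VALUES (?, ?, ?)",
                [(1, 'Asha', 10), (2, 'Ravi', 10)])   # two employees, one department

# The common field (dept_no) relates the two tables: a one-to-many join.
cur.execute("""SELECT d.dept_name, e.emp_name
               FROM department d JOIN employee e ON d.dept_no = e.dept_no""")
print(cur.fetchall())   # [('Surveying', 'Asha'), ('Surveying', 'Ravi')]
conn.close()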
You could use this database to query how many of a particular species of game fish were examined at a specific park during a date range of interest. This would be a non-spatial query, because we are just counting occurrences at one particular location. We are not using the coordinates to perform some type of buffer analysis or other spatial analysis to query the data.
The following diagram reflects the two primary spatial data encoding techniques: vector and raster. Image data utilizes techniques very similar to raster data; however, it typically lacks the internal formats required for analysis and modeling of the data. Images reflect pictures or photographs of the landscape.
Vector lines are often referred to as arcs and consist of a string of vertices terminated by a
node. A node is defined as a vertex that starts or ends an arc segment. Point features are defined
by one coordinate pair, a vertex. Polygonal features are defined by a set of closed coordinate pairs.
In vector representation, the storage of the vertices for each feature is important, as well as the
connectivity between features, e.g. the sharing of common vertices where features connect.
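A minimal sketch of these three vector primitives as simple coordinate structures; the layout below is illustrative and not the internal storage format of any GIS package:

# Point: a single coordinate pair (a vertex).
well = (521350.0, 4281458.0)

# Line (arc): a string of vertices; the first and last vertices are the nodes.
road = [(0.0, 0.0), (120.5, 35.2), (260.0, 80.7)]

# Polygon: a closed set of coordinate pairs (first vertex repeated at the end).
lake = [(10.0, 10.0), (60.0, 12.0), (55.0, 48.0), (12.0, 44.0), (10.0, 10.0)]

def shared_vertices(a, b):
    """Connectivity check: vertices common to two features."""
    return set(a) & set(b)

print(shared_vertices(road, lake))   # empty set: these two features do not connect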
Several different vector data models exist; however, only two are commonly used in GIS data storage.
The most popular method of retaining spatial relationships among features is to explicitly
record adjacency information in what is known as the topologic data model. Topology is a
mathematical concept that has its basis in the principles of feature adjacency and connectivity.
The topologic data structure is often referred to as an intelligent data structure because
spatial relationships between geographic features are easily derived when using them. Primarily
for this reason the topologic model is the dominant vector data structure currently used in GIS
technology. Many of the complex data analysis functions cannot effectively be undertaken
without a topologic vector data structure. Topology is reviewed in greater detail later on in the
book.
The secondary vector data structure that is common among GIS software is the computer-
aided drafting (CAD) data structure. This structure consists of listing elements, not features,
defined by strings of vertices, to define geographic features, e.g. points, lines, or areas. There is
considerable redundancy with this data model since the boundary segment between two polygons
can be stored twice, once for each feature. The CAD structure emerged from the development of
computer graphics systems without specific considerations of processing geographic features.
Accordingly, since features, e.g. polygons, are self-contained and independent, questions about
the adjacency of features can be difficult to answer. The CAD vector model lacks the definition of
spatial relationships between features that is defined by the topologic data model.
The size of cells in a tessellated data structure is selected on the basis of the data accuracy
and the resolution needed by the user. There is no explicit coding of geographic coordinates
required since that is implicit in the layout of the cells. A raster data structure is in fact a matrix
where any coordinate can be quickly calculated if the origin point is known, and the size of the
grid cells is known. Since grid cells can be handled as two-dimensional arrays in computer encoding, many analytical operations are easy to program. This makes tessellated data structures a popular choice for many GIS software packages. Topology is not a relevant concept with tessellated
structures since adjacency and connectivity are implicit in the location of a particular cell in the
data matrix.
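A minimal sketch of how a cell's ground coordinate can be recovered from only the origin point and the cell size; the grid parameters are assumptions for illustration:

def cell_to_coordinate(row, col, origin_x, origin_y, cell_size, n_rows):
    """Return the (x, y) ground coordinate of a cell centre.
    Rows are counted from the top of the matrix; the origin is the
    lower-left corner of the raster."""
    x = origin_x + (col + 0.5) * cell_size
    y = origin_y + (n_rows - row - 0.5) * cell_size
    return x, y

# A 100-row raster with 30 m cells whose lower-left corner is at (500000, 4280000):
print(cell_to_coordinate(row=0, col=0, origin_x=500000, origin_y=4280000,
                         cell_size=30, n_rows=100))   # centre of the top-left cell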
Several tessellated data structures exist; however, only two are commonly used in GISs.
The most popular cell structure is the regularly spaced matrix or raster structure. This data
structure involves a division of spatial data into regularly spaced cells. Each cell is of the same
shape and size. Squares are most commonly utilized.
Since geographic data is rarely distinguished by regularly spaced shapes, cells must be
classified as to the most common attribute for the cell. The problem of determining the proper
resolution for a particular data layer can be a concern. If one selects too coarse a cell size then data
may be overly generalized. If one selects too fine a cell size then too many cells may be created
resulting in a large data volume, slower processing times, and a more cumbersome data set. As
well, one can imply accuracy greater than that of the original data capture process and this may
result in some erroneous results during analysis.
As well, since most data is captured in a vector format, e.g. digitizing, data must be
converted to the raster data structure. This is called vector-raster conversion. Most GIS software
allows the user to define the raster grid (cell) size for vector-raster conversion. It is imperative that
the original scale, e.g. accuracy, of the data be known prior to conversion. The accuracy of the
data, often referred to as the resolution, should determine the cell size of the output raster map
during conversion.
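A minimal sketch of vector-to-raster conversion for point features, with the output cell size chosen by the user as described above; the coordinates and grid parameters are illustrative:

import numpy as np

def rasterize_points(points, origin_x, origin_y, cell_size, n_rows, n_cols, value=1):
    """Burn point features into a raster grid of the given cell size."""
    grid = np.zeros((n_rows, n_cols), dtype=np.int32)
    for x, y in points:
        col = int((x - origin_x) // cell_size)
        row = n_rows - 1 - int((y - origin_y) // cell_size)   # row 0 at the top
        if 0 <= row < n_rows and 0 <= col < n_cols:
            grid[row, col] = value          # the cell takes the feature's value
    return grid

wells = [(500045.0, 4280020.0), (500110.0, 4280075.0)]
raster = rasterize_points(wells, 500000, 4280000, cell_size=30, n_rows=10, n_cols=10)
print(raster.sum())   # 2 cells coded with wells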
Most raster based GIS software requires that the raster cell contain only a single discrete
value. Accordingly, a data layer, e.g. forest inventory stands, may be broken down into a series of
raster maps, each representing an attribute type, e.g. a species map, a height map, a density map,
etc. These are often referred to as one attribute maps. This is in contrast to most conventional
vector data models that maintain data as multiple attribute maps, e.g. forest inventory polygons
linked to a database table containing all attributes as columns. This basic distinction of raster data
storage provides the foundation for quantitative analysis techniques. This is often referred to as
raster or map algebra. The use of raster data structures allows for sophisticated mathematical modelling processes, while vector-based systems are often constrained by the capabilities and language of a relational DBMS.
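A minimal sketch of raster (map) algebra on three one-attribute layers of a forest inventory; the class codes and values are invented for illustration:

import numpy as np

# Three one-attribute rasters covering the same area.
species = np.array([[1, 1, 2],
                    [1, 2, 2],
                    [3, 3, 2]])          # 1 = pine, 2 = spruce, 3 = fir
height  = np.array([[12, 15, 22],
                    [14, 25, 27],
                    [ 8,  9, 30]])       # metres
density = np.array([[0.4, 0.6, 0.8],
                    [0.5, 0.9, 0.7],
                    [0.3, 0.2, 0.9]])    # crown closure fraction

# Map algebra: cell-by-cell logic combining the layers.
mature_spruce = (species == 2) & (height > 20) & (density > 0.6)
print(mature_spruce.astype(int))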
The selection of a particular data model, vector or raster, is dependent on the source and
type of data, as well as the intended use of the data. Certain analytical procedures require raster
data while others are better suited to vector data.
Fig.2.10. Image data is most often used for remotely sensed imagery
such as satellite imagery or digital orthophotos.
2.5.4. Vector and Raster – Advantages and Disadvantages
There are several advantages and disadvantages for using either the vector or raster data
model to store spatial data. These are summarized below.
Vector Data:
Advantages:
• Data can be represented at its original resolution and form without generalization.
• Graphic output is usually more aesthetically pleasing (traditional cartographic
representation);
• Since most data, e.g. hard copy maps, is in vector form, no data conversion is required.
• Accurate geographic location of data is maintained.
• Allows for efficient encoding of topology, and as a result more efficient operations
that require topological information, e.g. proximity, network analysis.
Disadvantages:
• The location of each vertex needs to be stored explicitly. For effective analysis, vector
data must be converted into a topological structure. This is often processing intensive
and usually requires extensive data cleaning. As well, topology is static, and any
updating or editing of the vector data requires re-building of the topology. Algorithms
for manipulative and analysis functions are complex and may be processing intensive.
Often, this inherently limits the functionality for large data sets, e.g. a large number of
features.
• Continuous data, such as elevation data, is not effectively represented in vector form.
Usually substantial data generalization or interpolation is required for these data
layers.
• Spatial analysis and filtering within polygons is impossible
Raster Data
Advantages:
• The geographic location of each cell is implied by its position in the cell matrix.
Accordingly, other than an origin point, e.g. bottom left corner, no geographic
coordinates are stored.
• Due to the nature of the data storage technique data analysis is usually easy to program
and quick to perform.
• The inherent nature of raster maps, e.g. one attribute maps, is ideally suited for
mathematical modeling and quantitative analysis.
• Discrete data, e.g. forestry stands, is accommodated equally well as continuous data,
e.g. elevation data, and facilitates the integrating of the two data types.
• Grid-cell systems are very compatible with raster-based output devices, e.g.
electrostatic plotters, graphic terminals.
Disadvantages:
• The cell size determines the resolution at which the data is represented.
• It is especially difficult to adequately represent linear features, depending on the cell resolution; accordingly, network linkages are difficult to establish.
• Processing of associated attribute data may be cumbersome if large amounts of data exist. Raster maps inherently reflect only one attribute or characteristic for an area.
• Since most input data is in vector form, data must undergo vector-to-raster conversion. Besides increased processing requirements, this may introduce data integrity concerns due to generalization and the choice of an inappropriate cell size.
• Most output maps from grid-cell systems do not conform to high-quality cartographic needs.
The huge size of the data is a major problem with raster data. An image consisting of
twenty different land-use classes takes the same storage space as a similar raster map showing the
location of a single forest. To address this problem many data compaction methods have been
developed which are discussed below:
2.6.4. Quadtree
• A raster is divided into a hierarchy of quadrants that are subdivided based on similar
value pixels.
• The division of the raster stops when a quadrant is made entirely from cells of the
same value.
• A quadrant that cannot be subdivided is called a leaf node.
A satellite or remote sensing image is raster data in which each cell has some value, and together these values create a layer. A raster may have a single layer or multiple layers. In a multi-layer (multi-band) raster, each layer is congruent with all the other layers: they have identical numbers of rows and columns and occupy the same locations in the plane. A digital elevation model (DEM) is an example of a single-band raster dataset, each cell of which contains only one value representing surface elevation.
2.6.5. A single-layer raster can be represented using
(a) Two colors (binary):
The raster is represented as a binary image with cell values of either 0 or 1, appearing black and white respectively.
(b) Gray-scale:
Typical remote sensing images are recorded in an 8-bit digital system. A grayscale image is thus represented in 256 shades of gray, ranging from 0 (black) to 255 (white). However, the human eye cannot distinguish between 256 different shades; it can only interpret about 8 to 16 shades of gray.
A satellite image can have multiple bands, i.e. the scene details are captured at different wavelengths (ultraviolet, visible, and infrared portions) of the electromagnetic spectrum. While
creating a map we can choose to display a single band of data or form a color composite using
multiple bands. A combination of any three of the available bands can be used to create RGB
composites. These composites present a greater amount of information as compared to that
provided by a single band raster.
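A minimal sketch of stacking any three bands into an RGB composite; random arrays stand in for real band data:

import numpy as np

# Three 8-bit bands of the same scene (stand-ins for, e.g., red, green, blue bands).
band_red   = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
band_green = np.random.randint(0, 256, (100, 100), dtype=np.uint8)
band_blue  = np.random.randint(0, 256, (100, 100), dtype=np.uint8)

# Stack any three bands into a rows x cols x 3 composite for display.
rgb_composite = np.dstack([band_red, band_green, band_blue])
print(rgb_composite.shape)   # (100, 100, 3)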
Compression ratio:
• The compression ratio (that is, the size of the compressed file compared to that of the uncompressed file) of lossy video codecs is nearly always far superior to that of the audio and still-image equivalents. Wavelet compression, used by raster formats such as MrSID, JPEG2000, and ER Mapper's ECW, takes time to decompress before drawing. A minimal computation of a compression ratio is sketched after this list.
• Compression is a series of techniques used for the reduction of space, bandwidth, cost, transmission and generation time, and the storage of data.
• It is a computer process using algorithms that reduce the size of electronic documents so they occupy less digital storage space.
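A minimal sketch of the compression ratio itself, using Python's built-in zlib as a stand-in lossless compressor (not one of the wavelet formats named above):

import zlib
import numpy as np

# A raster with large homogeneous areas compresses well.
land_cover = np.zeros((512, 512), dtype=np.uint8)
land_cover[200:300, 150:350] = 7          # a single forest patch

raw = land_cover.tobytes()
compressed = zlib.compress(raw)

ratio = len(compressed) / len(raw)        # compressed size vs. uncompressed size
print(f"{len(raw)} bytes -> {len(compressed)} bytes (ratio {ratio:.3f})")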
2.7.3. Quadtree
• The typical raster model divides an area into equal-sized rectangular cells.
• In many cases, however, a variable grid cell size is used for a more compact raster representation, as shown in Figure 2.13.
• Larger cells are used to represent large homogeneous areas and smaller cells for fine detail.
• The process involves regularly subdividing a map into four equal-sized quadrants. Any quadrant that contains more than one class is subdivided again. Subdivision continues within each quadrant until a square is found to be so homogeneous that it no longer needs to be divided; a sketch of this rule follows this list.
• A quadtree is then prepared, resembling an inverted tree with a "root", i.e. a point from which all branches expand; a leaf is a lowermost point, and all other points in the tree are nodes.
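A minimal sketch of the subdivision rule, assuming a small square raster whose side is a power of two; the dictionary node layout is an illustrative choice:

import numpy as np

def build_quadtree(grid, row=0, col=0, size=None):
    """Recursively subdivide until each quadrant holds a single class value."""
    if size is None:
        size = grid.shape[0]                      # assumes a square, power-of-two raster
    block = grid[row:row + size, col:col + size]
    if np.all(block == block[0, 0]) or size == 1:
        return {"leaf": True, "value": int(block[0, 0]), "size": size}
    half = size // 2
    return {"leaf": False, "children": [          # NW, NE, SW, SE quadrants
        build_quadtree(grid, row,        col,        half),
        build_quadtree(grid, row,        col + half, half),
        build_quadtree(grid, row + half, col,        half),
        build_quadtree(grid, row + half, col + half, half)]}

grid = np.array([[1, 1, 2, 2],
                 [1, 1, 2, 3],
                 [1, 1, 1, 1],
                 [1, 1, 1, 1]])
tree = build_quadtree(grid)
print(tree["children"][2])   # the homogeneous SW quadrant is a single leaf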
b) Topological features
A topology is a mathematical procedure that describes how features are spatially related and ensures the data quality of the spatial relationships. Topological relationships include the following three basic elements:
2.8.1. Connectivity
Arc node topology defines connectivity - arcs are connected to each other if they share a
common node. This is the basis for many network tracing and path finding operations.
Arcs represent linear features and the borders of area features. Every arc has a from-node
which is the first vertex in the arc and a to-node which is the last vertex. These two nodes define
the direction of the arc. Nodes indicate the endpoints and intersections of arcs. They do not exist
independently and therefore cannot be added or deleted except by adding and deleting arcs.
2.8.2. Contiguity
Polygon topology defines contiguity. The polygons are said to be contiguous if they share
a common arc. Contiguity allows the vector data model to determine adjacency.
Polygon A is outside the boundary of the area covered by polygons B, C and D. It is called
the external or universe polygon, and represents the world outside the study area. The universe
polygon ensures that each arc always has a left and right side defined.
2.8.3. Containment
Geographic features cover distinguishable area on the surface of the earth. An area is
represented by one or more boundaries defining a polygon.
The lake actually has two boundaries, one which defines its outer edge and the other
(island) which defines its inner edge. An island defines the inner boundary of a polygon. The
polygon D is made up of arc 5, 6 and 7. The 0 before the 7 indicates that the arc 7 creates an
island in the polygon.
Polygons are represented as an ordered list of arcs and not in terms of X, Y coordinates.
This is called Polygon-Arc topology. Since arcs define the boundary of polygon, arc coordinates
are stored only once, thereby reducing the amount of data and ensuring no overlap of boundaries
of the adjacent polygons.
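A minimal sketch of arc-node and polygon-arc topology tables consistent with the island example above; the node numbers and left/right polygon assignments are illustrative assumptions:

# Arc-node topology: each arc knows its from-node, to-node and the
# polygons on its left and right (values are illustrative).
arcs = {
    5: {"from_node": 1, "to_node": 2, "left": "B", "right": "D"},
    6: {"from_node": 2, "to_node": 1, "left": "C", "right": "D"},
    7: {"from_node": 3, "to_node": 3, "left": "D", "right": "E"},  # island ring
}

# Polygon-arc topology: a polygon is an ordered list of arcs, not coordinates.
# The 0 before arc 7 indicates that arc 7 forms an island inside polygon D.
polygons = {
    "D": [5, 6, 0, 7],
}

def is_contiguous(poly_a, poly_b, arcs):
    """Two polygons are contiguous if some arc has one on its left and the other on its right."""
    return any({a["left"], a["right"]} == {poly_a, poly_b} for a in arcs.values())

print(is_contiguous("B", "D", arcs))   # True: polygons B and D share arc 5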
Line entities:
Linear features are made by tracing two or more XY coordinate pairs.
• Simple line: requires a start and an end point.
• Arc: a set of XY coordinate pairs describing a continuous complex line. The shorter the line segments and the higher the number of coordinate pairs, the closer the chain approximates a complex curve.
Simple Polygons:
Enclosed structures formed by joining a set of XY coordinate pairs. The structure is simple, but it carries a few disadvantages, which are mentioned below:
• Lines between adjacent polygons must be digitized and stored twice; improper digitization gives rise to slivers and gaps.
• They convey no information about neighbouring polygons.
• Creating islands is not possible.
Because points can be placed irregularly over a surface, a TIN can have higher resolution in areas where the surface is highly variable. The model incorporates the original sample points, providing a check on the accuracy of the model. The information related to a TIN is stored in a file or a database table. Calculation of elevation, slope, and aspect is easy with a TIN, but TINs are less widely available than raster surface models and more time consuming in terms of construction and processing.
The TIN model is a vector data model which is stored using relational attribute tables. A TIN dataset contains three basic attribute tables:
• Arc attribute table, which contains the length, from-node, and to-node of all the edges of all the triangles.
• Node attribute table, which contains the x, y coordinates and z (elevation) of the vertices.
• Polygon attribute table, which contains the areas of the triangles, the identification numbers of the edges, and the identifiers of the adjacent polygons.
Storing data in this manner eliminates redundancy, as all the vertices and edges are stored only once even if they are used for more than one triangle. As a TIN stores topological relationships, the datasets can be applied to vector-based geo-processing such as automatic contouring, 3D landscape visualization, volumetric design, surface characterization, etc.
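A minimal sketch of the three TIN attribute tables for a single triangle, held as plain Python dictionaries; the identifiers and coordinate values are illustrative:

import math

# Node attribute table: x, y coordinates and z (elevation) of each vertex.
nodes = {
    1: (0.0,   0.0,  50.0),
    2: (100.0, 0.0,  62.0),
    3: (50.0,  80.0, 71.0),
}

# Arc (edge) attribute table: from-node, to-node and length of each edge.
def edge_length(a, b):
    (x1, y1, _), (x2, y2, _) = nodes[a], nodes[b]
    return math.hypot(x2 - x1, y2 - y1)

arcs = {
    1: {"from_node": 1, "to_node": 2, "length": edge_length(1, 2)},
    2: {"from_node": 2, "to_node": 3, "length": edge_length(2, 3)},
    3: {"from_node": 3, "to_node": 1, "length": edge_length(3, 1)},
}

# Polygon (triangle) attribute table: edges, area and adjacent triangles.
polygons = {
    1: {"edges": [1, 2, 3], "area": 0.5 * 100.0 * 80.0, "adjacent": []},
}

print(arcs[1]["length"], polygons[1]["area"])   # 100.0 4000.0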
Data Model
The data model represents a set of guidelines to convert the real world (entities) into digitally and logically represented spatial objects consisting of attributes and geometry. The attributes are managed by a thematic or semantic structure, while the geometry is represented by a geometric-topological structure.
Vector Data Model: A representation of the world using points, lines, and polygons (shown in Figure 2.21). Vector models are useful for storing data that has discrete boundaries, such as country borders, land parcels, and streets.
Raster Data Model: A representation of the world as a surface divided into a regular grid of cells. Raster models are useful for storing data that varies continuously, as in an aerial photograph, a satellite image, a surface of chemical concentrations, or an elevation surface.
Since the dawn of time, maps have been using symbols to represent real-world features. In
GIS terminology, real-world features are called spatial entities.
The cartographer decides how much data needs to be generalized in a map. This depends
on scale and how much detail will be displayed in the map. The decision to choose vector points,
lines or polygons is governed by the cartographer and scale of the map.
(1) Points
For Example: At a regional scale, city extents can be displayed as polygons because this
amount of detail can be seen when zoomed in. But at a global scale, cities can be represented
as points because the detail of city boundaries cannot be seen.
Vector point data are stored as pairs of XY coordinates (latitude and longitude). Complementary information, such as a street name or date of construction, can accompany them in an attribute table.
(2) Lines
Lines usually represent features that are linear in nature. Cartographers can use different line thicknesses to show the size of the feature. For example, a 500-meter-wide river may be drawn thicker than a 50-meter-wide river. Lines can represent features that exist in the real world, such as roads or rivers, or artificial divisions such as regional borders or administrative boundaries.
Points are simply pairs of XY coordinates (latitude and longitude). When you connect
each point or vertex with a line in a particular order, they become a vector line feature.
Networks are line data sets but they are often considered to be different. This is because linear
networks are topologically connected elements. They consist of junctions and turns with
connectivity. If you were to find an optimal route using a traffic line network, it would follow
one-way streets and turn restrictions to solve an analysis. Networks are just that smart.
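A minimal sketch of why such connectivity matters: a tiny directed street network (one-way streets appear as edges in only one direction) searched with Dijkstra's shortest-path algorithm. Junction names and travel times are invented for illustration:

import heapq

# Directed street network: one-way streets appear in only one direction.
# Edge weights are travel times in minutes (illustrative values).
network = {
    "A": [("B", 2.0)],            # A -> B is one-way
    "B": [("C", 3.0), ("D", 6.0)],
    "C": [("D", 1.5)],
    "D": [("A", 4.0)],
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over the directed graph."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

print(shortest_path(network, "A", "D"))   # (6.5, ['A', 'B', 'C', 'D']) respects one-way edges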
(3) Polygons
Examples of polygons are buildings, agricultural fields and discrete administrative areas.
Cartographers use polygons when the map scale is large enough for the features to be represented as polygons.
For example:
Each pixel value in a satellite image has a red, green and blue value. Alternatively, each
value in an elevation map represents a specific height. It could represent anything from
rainfall to land cover.
Raster models are useful for storing data that varies continuously. For example, elevation
surfaces, temperature and lead contamination.
In a discrete raster land cover/use map, you can distinguish each thematic class. Each class
can be discretely defined where it begins and ends. In other words, each land cover cell is
definable and it fills the entire area of the cell.
Discrete data usually consists of integers to represent classes. For example, the value 1
might represent urban areas; the value 2 represents forest and so on.
A continuous raster surface can be derived from a fixed registration point. For example,
digital elevation models use sea level as a registration point. Each cell represents a value above or
below sea level. As another example, aspect cell values have fixed directions such as north, east,
south or west.
Phenomena can gradually vary along a continuous raster from a specific source. In a raster
depicting an oil spill, it can show how the fluid moves from high concentration to low
concentration. At the source of the oil spill, concentration is higher and diffuses outwards with
diminishing values as a function of distance.
In the end, it really comes down to the way in which the cartographer conceptualizes the
feature in their map.
• Do you want to work with pixels or coordinates? Raster data works with pixels.
Vector data consists of coordinates.
• What is your map scale? Vectors can scale objects up to the size of a billboard, but you don't get that type of flexibility with raster data.
• Do you have restrictions on file size? Raster files can be larger than vector data sets covering the same phenomenon and area.
A TIN is a vector based representation of the physical land surface or sea bottom, made up
of irregularly distributed nodes and lines with three dimensional coordinates (x,y, and z) that are
arranged in a network of non-overlapping triangles. TINs are often derived from the elevation data
of a rasterized digital elevation model (DEM).
Edges:
Every node is joined with its nearest neighbors by edges to form triangles, which satisfy
the Delaunay criterion. Each edge has two nodes, but a node may have two or more edges.
Because edges have a node with a z value at each end, it is possible to calculate a slope along the
edge from one node to the other.
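A minimal sketch of that slope calculation for one edge, given the (x, y, z) values of its two nodes; the coordinates are illustrative:

import math

def edge_slope(node_a, node_b):
    """Slope (percent and degrees) along a TIN edge defined by two (x, y, z) nodes."""
    (x1, y1, z1), (x2, y2, z2) = node_a, node_b
    run = math.hypot(x2 - x1, y2 - y1)     # horizontal distance
    rise = z2 - z1                         # elevation difference
    percent = 100.0 * rise / run
    degrees = math.degrees(math.atan2(rise, run))
    return percent, degrees

print(edge_slope((0.0, 0.0, 50.0), (100.0, 0.0, 62.0)))   # (12.0, ~6.84 degrees)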
TIN:
Advantages - the ability to describe the surface at different levels of resolution; efficiency in storing data.
Disadvantages - in many cases the TIN requires visual inspection and manual control of the network.
The TIN creates triangles from a set of points called mass points, which always become nodes. The user is not responsible for selecting which points become nodes; all the nodes are added according to a set of rules.
Mass points can be located anywhere, the more carefully selected, the more accurate the model of
the surface will be. Well-placed mass points occur when there is a major change in the shape of
the surface, for example, at the peak of a mountain, the floor of a valley, or at the edge (top and
bottom) of cliffs. By connecting points on a valley floor or along the edge of a cliff, a linear break
in the surface can be defined. These are called break lines. Break lines can control the shape of the
surface model.
They always form edges of triangles and, generally, cannot be moved. A triangle always has three and only three straight sides, making its representation rather simple. A triangle is assigned a unique identifier and is defined by its three nodes and its two or three neighboring triangles.
TIN is a vector-based topological data model that is used to represent terrain data. A TIN
represents the terrain surface as a set of interconnected triangular facets. For each of the three
vertices, the XY (geographic location) and the (elevation) Z values are encoded.
2.12. GRID/LUNR/MAGI
In this model each grid cell is referenced or addressed individually and is associated with identically positioned grid cells in all other coverages, rather like a vertical column of grid cells, each dealing with a separate theme. Comparisons between coverages are therefore performed on a single cell column at a time. Soil attributes in one coverage can be compared with vegetation attributes in a second coverage: each soil grid cell in one coverage can be compared with the vegetation grid cell in the second coverage. The advantage of this data structure is that it facilitates multiple-coverage analysis for single cells. However, it limits the examination of spatial relationships between entire groups or themes in different coverages.
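A minimal sketch of this single-cell, multi-coverage comparison, with soil and vegetation coverages held as identically sized arrays; the class codes are illustrative:

import numpy as np

# Two coverages of the same area: identically positioned cells correspond.
soil = np.array([[1, 1, 2],
                 [2, 2, 3],
                 [3, 3, 3]])          # soil class codes
vegetation = np.array([[5, 5, 5],
                       [6, 6, 5],
                       [6, 7, 7]])    # vegetation class codes

# Compare the vertical "column" of values at one cell position across coverages.
row, col = 1, 2
print(soil[row, col], vegetation[row, col])     # 3 5

# Or compare whole coverages cell by cell, e.g. soil class 3 under vegetation class 7.
match = (soil == 3) & (vegetation == 7)
print(match.astype(int))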
2.13.1. Standards
Most of the OGC standards depend on a generalized architecture captured in a set of
documents collectively called the Abstract Specification, which describes a basic data model for
representing geographic features. Atop the Abstract Specification members have developed and
continue to develop a growing number of specifications, or standards to serve specific needs for
interoperable location and geospatial technology, including GIS.
The OGC standards baseline comprises more than thirty standards, including:
Although the term "garbage in, garbage out" certainly applies to GIS data, there are other
important data quality issues besides the input data that need to be considered.
Position Accuracy
Position accuracy is the expected deviation of the geographical location of an object in the data set (e.g. on a map) from its true ground position. It is usually tested by selecting a specified sample of points in a prescribed manner and comparing their position coordinates with an independent and more accurate source of information. There are two components to position accuracy: the bias and the precision.
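A minimal sketch of computing those two components from a sample of test points, where bias is taken as the mean error and precision as the spread of the errors; the coordinates are illustrative:

import numpy as np

# Coordinates of sample points in the data set and in the more accurate source.
measured = np.array([[100.2, 200.1], [150.4, 250.3], [199.8, 300.6]])
reference = np.array([[100.0, 200.0], [150.0, 250.0], [200.0, 300.0]])

errors = measured - reference            # per-point deviation in X and Y
bias = errors.mean(axis=0)               # systematic shift (bias)
precision = errors.std(axis=0)           # spread of errors (precision)
rmse = np.sqrt((errors ** 2).mean(axis=0))

print("bias:", bias, "precision:", precision, "RMSE:", rmse)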
Attribute Accuracy
Attributes may be discrete or continuous variables. A discrete variable can take on only a
finite number of values whereas a continuous variable can take on any number of values.
Categories like land use class, vegetation type, or administrative area are discrete variables. They
are, in effect, ordered categories where the order indicates the hierarchy of the attribute.
Logical Consistency
Logical consistency refers to how well logical relations among data elements are maintained. It also refers to the fidelity of relationships encoded in the database; these may refer to the geometric structure of the data model (e.g. topologic consistency) or to the encoded attribute information (e.g. semantic consistency).
(a) Completeness
Completeness refers to the exhaustiveness of the information in terms of spatial and
attribute properties encoded in the database. It may include information regarding feature
selection criteria, definition and mapping rules and the deviations from them. The tests on
spatial completeness may be obtained from the topological tests used for logical consistency, whereas the test for attribute completeness is done by comparing a master list of geo-codes to the codes actually appearing in the database.
There are several aspects to completeness as it pertains to data quality. They are grouped
here into three categories: completeness of coverage, classification and verification.
The completeness of coverage is the proportion of data available for the area of interest.
Example:
Demographic information is usually very time sensitive. It can change significantly
over a year. Land cover will change quickly in an area of rapid urbanization.
(c) Lineage
The lineage of a data set is its history, the source data and processing steps used to
produce it. The source data may include transaction records, field notes etc. Ideally, some
indication of lineage should be included with the data set since the internal documents are
rarely available and usually require considerable expertise to evaluate. Unfortunately, lineage
information most often exists as the personal experience of a few staff members and is not
readily available to most users.
Accessibility refers to the ease of obtaining and using the data. The accessibility of a data
set may be restricted because the data are privately held. Access to government-held information
may be restricted for reasons of national security or to protect citizen rights. Census data are
usually restricted in this way. Even when the right to use restricted data can be obtained, the time
and effort needed to actually receive the information may reduce its overall suitability.
The direct cost of a data set purchased from another organization is usually well known: it
is the price paid for the data. However, when the data are generated within the organization, the
true cost may be unknown. Assessing the true cost of these data is usually difficult because the
services and equipment used in their production support other activities as well.
The indirect costs include all the time and materials used to make use of the data. When
data are purchased from another organization, the indirect costs may actually be more significant
than the direct ones.
It may take longer for staff to handle data with which they are unfamiliar, or the data
may not be compatible with the other data sets to be used.
But over many years, GIS data most often have been digitized from several sources, including hard copy maps, rectified aerial photography, and satellite imagery. Hard-copy
maps (e.g. paper, vellum and plastic film) may contain unintended production errors as well as
unavoidable or even intended errors in presentation. The following are "errors" commonly found
in maps.
Indistinct Boundaries
Indistinct boundaries typically include the borders of vegetated areas, soil types, wetlands
and land use areas. In the real world, such features are characterized by gradual change, but
cartographers represent these boundaries with a distinct line. Some compromise is inevitable.
Map Scale
Cartographers and photogrammetrists work to accepted levels of accuracy for a given map
scale as per National Map Accuracy Standards. Locations of map features may disagree with
actual ground locations, although the error likely will fall within specified tolerances. Of course,
the problem is compounded by limitations in linear map measurements, typically about 1/100th of an inch at map scale.
Map Symbology
It is impossible to perfectly depict the real world using lines, colors, symbols and patterns.
Cartographers work with certain accepted conventions. As a result, facts and features represented
on maps often must be interpreted or interpolated, which can produce errors. For example, terrain
elevations typically are depicted using topographic contour lines and spot elevations. Elevations
of the ground between the lines and spots must be interpolated. Also, areas symbolized as "forest"
may not depict all open areas among the trees.
A digitizer must accurately discern the centre of a line or point as well as accurately trace
it with a cursor. This task is especially prone to error if the map scale is small and the lines or
symbols are relatively thick or large. The method of digitizing curvilinear lines also affects
accuracy. "Point-mode" digitizing, for example, places sample points at selected locations along a
line to best represent it in a GIS. The process is subject to judgment of the digitizer who selects
the number and placement of data points. "Stream-mode" digitizing collects data points at a pre-
set frequency, usually specified as the distance or time between data points. Every time an
operator strays from an intended line, a point digitized at that moment would be inaccurate. This
method also collects more data points than may be needed to faithfully represent a map feature.
Therefore, post-processing techniques often are used to "weed out" unneeded data points.
Heads-up digitizing often is preferred over table digitizing, because it typically yields
better results more efficiently. Keyed data entry of land parcel data is the most precise method.
Moreover, most errors are fairly obvious, because the source data usually are carefully computed
and thoroughly checked. Most keyed data entry errors show as obvious mismatches in the parcel
"fabric."
GIS software usually includes functions that detect several types of database errors. These
error-checking routines can find mistakes in data topology, including gaps, overshoots, dangling
lines and unclosed polygons. An operator sets tolerances that the routine uses to search for errors,
and system effectiveness depends on setting correct tolerances. For example, tolerances too small
may pass over unintentional gaps, and tolerances too large may improperly remove short dangling
lines or small polygons that were intentionally digitized.
The phrasing of spatial and attribute queries also may lead to errors. In addition, the use of
Boolean operators can be complicated, and results can be decidedly different, depending on how a
data query is structured or a series of queries are executed. For example, the query, "Find all
structures within the 100 year flood zone," yields a different result than, "Find all structures
touching the 100 year flood zone." The former question will find only those structures entirely
within the flood zone, whereas the latter also will include structures that are partially within the
zone.
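A minimal sketch of the two query phrasings using the shapely package (assumed to be available); within selects only structures entirely inside the zone, while intersects also selects those partially inside:

from shapely.geometry import Polygon, box

# 100-year flood zone and two building footprints (coordinates are illustrative).
flood_zone = Polygon([(0, 0), (100, 0), (100, 100), (0, 100)])
house_inside = box(10, 10, 20, 20)       # entirely within the zone
house_on_edge = box(95, 40, 110, 60)     # partially within the zone

for name, house in [("inside", house_inside), ("on edge", house_on_edge)]:
    print(name,
          "within:", house.within(flood_zone),
          "intersects:", house.intersects(flood_zone))
# "within" selects only the first house; "intersects" selects both.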
Dataset overlay is a powerful and commonly used GIS tool, but it can yield inaccurate
results. To determine areas suitable for a specific type of land development project, one may
overlay several data layers, including natural resources, wetlands, flood zones, land uses, land
ownership and zoning. The result usually will narrow the possible choices down to a few parcels
that would be investigated more carefully to make a final choice. The final result of the analysis
will reflect any errors in the original GIS data. Its accuracy only will be as good as the least
accurate GIS dataset used in the analysis.
It is also common to overlay and merge GIS data to form new layers. In certain
circumstances, this process introduces a new type of error: the polygon "sliver." Slivers often
appear when two GIS datasets with common boundary lines are merged. If the common elements
have been digitized separately, the usual result will be sliver polygons. Most GIS software
products offer routines that can find and fix such errors, but users must be careful in setting search
and correction tolerances.
Many errors can be avoided through proper selection and "scrubbing" of source data
before they are digitized. Data scrubbing includes organizing, reviewing and preparing the source
materials to be digitized. The data should be clean, legible and free of ambiguity. "Owners" of
source data should be consulted as needed to clear up questions that arise.
Data entry procedures should be thoroughly planned, organized and managed to produce
consistent, repeatable results. Nonetheless, a thorough, disciplined quality review and revision
process also is needed to catch and eliminate data entry errors. All production and quality control
procedures should be documented, and all personnel should be trained in these procedures.
Moreover, the work itself should be documented, including a record of what was done, who did it, when it was done, who checked it, what errors were found, and how they were corrected.
To avoid misusing GIS data and the misapplication of analytical software, GIS analysts
including casual users need proper training. Moreover, GIS data should not be provided without
metadata indicating the source, accuracy and specifics of how the data were entered.