Learning Geospatial Analysis With Python-Second Edition - Sample Chapter
Geospatial analysis is used in almost every field you
can think of, from medicine to defense to farming.
This book will guide you through this exciting and
complex field.
We start by giving you a little background on the field
and a survey of the techniques and technology used.
We then split the field into its component areas: GIS,
remote sensing, elevation data, advanced modeling,
and real-time data. This book will teach you everything
you need to know about geospatial analysis, from using
a particular software package or API to using generic
algorithms that can be applied to any problem. This book
focuses on pure Python whenever possible to minimize
compiling platform-dependent binaries, so that you don't
become bogged down in preparing for analysis. This
book will complete your technical library through handy
recipes that will give you a good understanding of a field
that supplements many modern-day human endeavors.
Learning Geospatial
Analysis with Python
Second Edition
An effective guide to geographic information system and remote
sensing analysis using Python 3
Joel Lawhead
Joel Lawhead is a certified Project Management Professional (PMP), a certified
GIS Professional (GISP), and the Chief Information Officer (CIO) of NVision
Solutions Inc., an award-winning firm that specializes in geospatial technology
integration and sensor engineering.
Joel began using Python in 1997 and started combining it with geospatial software
development in 2000. He is the author of the first edition of Learning Geospatial
Analysis with Python and QGIS Python Programming Cookbook, both by Packt
Publishing. His Python cookbook recipes were featured in two editions of Python
Cookbook, O'Reilly Media. He is also the developer of the widely used, open source
Python Shapefile Library (PyShp). He maintains the geospatial technical blog at
http://geospatialpython.com/ and the Twitter feed @SpatialPython, which
discusses the use of the Python programming language in the geospatial industry.
In 2011, Joel reverse-engineered and published the undocumented shapefile spatial
indexing format and assisted fellow geospatial Python developer, Marc Pfister,
in reversing the algorithm used, allowing developers around the world to create
better-integrated and more robust geospatial applications.
Joel serves as the lead architect, project manager, and co-developer for geospatial
applications used by U.S. government agencies, including NASA, FEMA, NOAA,
the U.S. Navy, and many other commercial and non-profit organizations. In 2002,
he received the international Esri Special Achievement in GIS award for his work
on the Real-Time Emergency Action Coordination Tool (REACT), for emergency
management using geospatial analysis.
Preface
The book starts with a background on geospatial analysis, surveys the techniques
and technology used, and splits the field into its component specialty areas, such as
Geographic Information Systems (GIS), remote sensing, elevation data, advanced
modeling, and real-time data. The focus of the book is to lay a strong foundation in
using the powerful Python language and framework in order to approach geospatial
analysis effectively. In doing so, we'll focus on using pure Python as well as certain
Python tools and APIs, and on using generic algorithms. Readers will learn to
analyze the various forms of geospatial data they encounter, track data in real time,
and apply these techniques to interesting scenarios.
While many third-party geospatial libraries are used throughout the examples, a
special effort is made to use pure Python, with no dependencies, whenever possible.
This focus on pure Python 3 examples is what sets this book apart from nearly all
other information in this field; it may be the only geospatial book on the market
currently that uses Python 3. We will also go through some popular libraries that
weren't in the previous edition of the book.
Chapter 3, The Geospatial Technology Landscape, tells you about the geospatial
technology ecosystem that consists of thousands of software libraries and packages.
This vast array of choices is overwhelming for newcomers to geospatial analysis.
The secret to learning geospatial analysis quickly is understanding the handful of
libraries and packages that really matter. Most other software is derived from these
critical packages. Understanding the hierarchy of geospatial software and how it's
used allows you to quickly comprehend and evaluate any geospatial tool.
Chapter 4, Geospatial Python Toolbox, introduces software and libraries that form the
basis of the book and are used throughout. Python's role in the geospatial industry
is elaborated: as a GIS scripting language, as a mash-up glue language, and as a
full-blown programming language. Code examples are used to teach data editing concepts, and
many of the basic geospatial concepts in Chapter 1, Learning Geospatial Analysis Using
Python, are also demonstrated in this chapter.
Chapter 5, Python and Geographic Information Systems, teaches you to build simple yet
practical Python GIS products, using processes that can be applied to a variety
of problems.
Chapter 6, Python and Remote Sensing, shows you how to work with remote sensing
geospatial data. Remote sensing includes some of the most complex and
least-documented geospatial operations. This chapter will build a solid core for you and
demystify remote sensing using Python.
Chapter 7, Python and Elevation Data, demonstrates the most common uses of
elevation data and how to work with its unique properties. Elevation data deserves
a chapter all on its own. Elevation data can be contained in almost any geospatial
format but is used quite differently from other types of geospatial data.
Chapter 8, Advanced Geospatial Python Modeling, uses Python to teach you the true
power of geospatial technology. Geospatial data editing and processing help us
understand the world as it is. The true power of geospatial analysis is modeling.
Geospatial models help us predict the future, narrow a vast field of choices down
to the best options, and visualize concepts that cannot be directly observed in the
natural world.
Chapter 9, Real-Time Data, examines the modern phenomenon of real-time geospatial data.
A wise geospatial analyst once said, "As soon as a map is created, it is obsolete." Until
recently, by the time you collected data about the Earth, processed it, and created
a geospatial product, the world it represented had already changed. But modern
geospatial data shatters this notion. Datasets that are up to the minute, or even the
second, are available over the Internet. This data fundamentally changes the way we
perform geospatial analysis.
Chapter 10, Putting It All Together, combines the skills from the previous chapters step
by step to build a generic corporate system to manage customer support requests
and field support personnel that could be applied to virtually any organization.
Government and humanitarian agencies faced a vicious race against time to contain
the outbreak. Patients without treatment died in as little as six days after symptoms
appeared. The most critical piece of information was the location of new cases relative
to the existing cases. The challenge they faced required reporting these new cases in
mostly rural areas with limited infrastructure. Knowing where the virus existed in
humans provided the foundation for all of the decisions that response agencies needed
for containment. The location of cases defined the extent of the outbreak. It allowed
governments to prioritize the distribution of containment resources and medical
supplies. It allowed them to trace the disease to the first victim. It ultimately allowed
them to see if they were making progress in slowing the disease.
This map is a relative heat map of the affected countries based on the number of
cases documented and their location:
The solution came from a Liberian student using open source software from a
non-profit Kenyan technology start-up called Ushahidi. Ushahidi is the Swahili
word for testimony or witness. A team of developers in Kenya originally developed
the system in 2008 to track reports of violence after the disputed presidential election
there. Kpetermeni Siakor set up the system in Liberia in 2011 following a similarly
controversial election. When the epidemic hit Liberia, Siakor turned Ushahidi into a
disease-monitoring platform.
Siakor created a dispatch team of volunteers who received phone calls from the
public reporting possible Ebola cases. The details were entered into the Ushahidi
database, which was available on a web-based map almost instantly. The Liberian
Ministry of Health and other humanitarian organizations could access the website,
track the spread of the disease, and properly distribute supplies at health centers.
This effort, amplified by the international response, would ultimately contain the
epidemic globally. In 2015, cases are receding as the world monitors West African
cases in anticipation of the last patient recovering. The following screenshot shows
the latest Liberian public Ushahidi map as of April, 2015:
Relief workers also used the Ushahidi disaster mapping system to respond to the
2010 Haiti earthquake. Maps have always been used in disaster relief; however, the
modern evolution of GPS-enabled mobile phones, web technology, and open source
geospatial software has created a revolution in humanitarian efforts globally.
The Ushahidi API has a Python library that you can find at
https://github.com/ushahidi/ushapy.
Beyond disasters
The application of geospatial modeling to disaster relief is one of the most recent
and visible case studies. However, the use of geospatial analysis has been increasing
steadily over the last 15 years. In 2004, the U.S. Department of Labor declared the
geospatial industry one of 13 high-growth industries in the United States, expected
to create millions of jobs in the coming decades.
Geospatial analysis can be found in almost every industry, including real estate, oil
and gas, agriculture, defense, politics, health, transportation, and oceanography, to
name a few. For a good overview of how geospatial analysis is used in dozens of
different industries, visit http://geospatialrevolution.psu.edu.
Over the centuries, the art of cartography and the science of land surveying have developed,
but it wasn't until the 1800s that significant advances in geographic analysis emerged.
Deadly cholera outbreaks in Europe between 1830 and 1860 led geographers in Paris
and London to use geographic analysis for epidemiological studies.
In 1832, Charles Picquet used different half-toned shades of gray to represent deaths
per thousand citizens in the 48 districts of Paris as part of a report on the cholera
outbreak. In 1854, Dr. John Snow expanded on this method by tracking a cholera
outbreak in London as it occurred. By placing a point on a map of the city each time
a fatality was diagnosed, he was able to analyze the clustering of cholera cases. Snow
traced the disease to a single water pump and prevented further cases. The map has
three layers with streets, an X for each pump, and dots for each cholera death:
Geospatial analysis wasn't just used for the war on diseases. For centuries, generals
and historians have used maps to understand human warfare. A retired French
engineer named Charles Minard produced some of the most sophisticated infographics
ever drawn between 1850 and 1870. The term infographics is too generic to describe
these drawings because they have strong geographic components. The quality and
detail of these maps make them fantastic examples of geographic information analysis
even by today's standards. Minard released his masterpiece, Carte figurative des pertes
successives en hommes de l'Armée Française dans la campagne de Russie 1812-1813, in 1869,
which depicted the decimation of Napoleon's army in the Russian campaign of 1812.
The map shows the size and location of the army over time along with prevailing
weather conditions. The following graphic contains four different series of information
on a single theme. It is a fantastic example of geographic analysis using pen and paper.
The size of the army is represented by the widths of the brown and black swaths at a
ratio of one millimeter for every 10,000 men. The numbers are also written along the
swaths. The brown-colored path shows soldiers who entered Russia, while the black
represents the ones who made it out. The map scale is shown to the right in the center
as one French league (2.75 miles or 4.4 kilometers). The chart at the bottom runs from
right to left and depicts the brutal freezing temperatures experienced by the soldiers on
the return march home from Russia:
While far more mundane than a war campaign, Minard released another compelling
map cataloguing the number of cattle sent to Paris from around France. Minard used
pie charts of varying sizes in the regions of France to show each area's variety and
volume of cattle that was shipped:
In the early 1900s, mass printing drove the development of the concept of map
layers, a key feature of geospatial analysis. Cartographers drew different map
elements (vegetation, roads, and elevation contours) on plates of glass that
could then be stacked and photographed to be printed as a single image. If the
cartographer made a mistake, only one plate of glass had to be changed instead of
the entire map. Later, the development of plastic sheets made it even easier to create,
edit, and store maps in this manner. However, the layering concept for maps as a
benefit to analysis would not come into play until the modern computer age.
Roger Tomlinson is often called the father of GIS. After launching the Canada
Geographic Information System (CGIS), he earned his doctorate from the University
of London with his 1974 dissertation, entitled The
application of electronic computing methods and techniques to the storage, compilation, and
assessment of mapped data, which describes GIS and geospatial analysis. Tomlinson
now runs his own global consulting firm, Tomlinson Associates Ltd., and remains an
active participant in the industry. He is often found delivering the keynote addresses
at geospatial conferences.
CGIS is the starting point of geospatial analysis as defined by this book. However,
this book would not have been written if not for the work of Howard Fisher and the
Harvard Laboratory for Computer Graphics and Spatial Analysis at the Harvard
Graduate School of Design. His work on SYMAP, GIS software that output maps
to a line printer, started an era of development at the laboratory, which produced
two other important packages and, as a whole, permanently defined the geospatial
industry. SYMAP led to other packages including GRID and the Odyssey project
from the same lab. GRID was a raster-based GIS system that used cells to represent
geographic features instead of geometry. GRID was written by Carl Steinitz and
David Sinton. The system later became IMGRID. Next came Odyssey. Odyssey was a
team effort led by Nick Chrisman and Denis White. It was a system of programs that
included many advanced geospatial data management features that are typical of
modern geodatabase systems. Harvard attempted to commercialize these packages
with limited success. However, their impact is still seen today. Virtually every
existing commercial and open source package owes something to these code bases.
Howard Fisher produced a 1967 film using the output from
SYMAP to show the urban expansion of Lansing, Michigan, from
1850 to 1965 by hand coding decades of property information into
the system. The analysis took months but would take only a few
minutes to create now using modern tools and data. You can see the
film at http://youtu.be/xj8DQ7IQ8_o.
There are dozens of graphical user interface (GUI) geospatial desktop
applications available today from companies including Esri, ERDAS, Intergraph,
and ENVI, to name a few. Esri is the oldest continuously operating GIS software
company, having started in the late 1960s. In the open source realm, packages
including Quantum GIS (QGIS) and Geographic Resources Analysis Support
System (GRASS) are widely used. Beyond comprehensive desktop software
packages, software libraries for the building of new software exist in the thousands.
Remote sensing
Remote sensing is the collection of information about an object without making
physical contact with that object. In the context of geospatial analysis, the object
is usually the Earth. Remote sensing also includes the processing of the collected
information. The potential of geographic information systems is limited only by the
available geographic data. The cost of land surveying, even using a modern GPS, to
populate a GIS has always been resource-intensive. The advent of remote sensing
not only dramatically reduced this cost of geospatial analysis, but it took the field
in entirely new directions. In addition to powerful reference data for GIS systems,
remote sensing has made possible the automated and semi-automated generation
of GIS data by extracting features from images and geographic data. The eccentric
French photographer, Gaspard-Félix Tournachon, also known as Nadar, took the first
aerial photograph in 1858 from a hot-air balloon over Paris:
The value of a true bird's-eye view of the world was immediately apparent. As early
as 1920, books on aerial photo interpretation began to appear.
When the United States entered the Cold War with the Soviet Union after World
War II, aerial photography to monitor military capability became prolific with the
invention of the American U-2 spy plane. The U-2 spy plane could fly at 70,000 feet,
putting it out of the range of existing anti-aircraft weapons designed to reach only
50,000 feet. The American U-2 flights over Russia ended when the Soviets finally shot
down a U-2 and captured the pilot.
However, aerial photography had little impact on modern geospatial analysis. Planes
could only capture small footprints of an area. Photographs were tacked to walls
or examined on light tables but not in the context of other information. Though
extremely useful, aerial photo interpretation was simply another visual perspective.
The game changer came on October 4, 1957, when the Soviet Union launched the
Sputnik 1 satellite. The Soviets had scrapped a much more complex and sophisticated
satellite prototype because of manufacturing difficulties. Once corrected, this prototype
would later become Sputnik 3. Instead, they opted for a simple metal sphere with four
antennae and a simple radio transmitter. Other countries including the United States
were also working on satellites. The satellite initiatives were not entirely a secret.
They were driven by scientific motives as part of the International Geophysical Year
(IGY). Advancement in rocket technology made artificial satellites a natural evolution
for Earth science. However, in nearly every case, each country's defense agency was
also heavily involved. Similar to the Soviets, other countries were struggling with
complex satellite designs packed with scientific instruments. The Soviets' decision to
switch to the simplest possible device for the sole reason of launching a satellite before
the Americans was effective. Sputnik was visible in the sky as it passed over and its
radio pulse could be heard by amateur radio operators. Despite Sputnik's simplicity,
it provided valuable scientific information that could be derived from its orbital
mechanics and radio frequency physics.
The Sputnik program's biggest impact was on the American space program.
America's chief adversary had gained a tremendous advantage in the race to space.
The United States ultimately responded with the Apollo moon landings. However,
before this, the U.S. launched a program that would remain a national secret until
1995. The classified CORONA program resulted in the first pictures from space. The
U.S. and Soviet Union had signed an agreement to end spy plane flights but satellites
were conspicuously absent from the negotiations. The following map shows the
CORONA process. Dashed lines are the satellite flight paths, the longer white tubes
are the satellites, the smaller white cones are the film canisters, and the black blobs
are the control stations that triggered the ejection of the film so that a plane could
catch it in the sky:
The first CORONA satellite was a four-year effort with many setbacks. However,
the program ultimately succeeded. The difficulty of satellite imaging, even today,
is retrieving the images from space. The CORONA satellites used canisters of black
and white film that were ejected from the vehicle once exposed. As the film canister
parachuted to Earth, a U.S. military plane would catch the package in midair. If
the plane missed the canister, it would float for a brief duration in the water before
sinking into the ocean to protect the sensitive information. The U.S. continued to
develop the CORONA satellites until they matched the resolution and photographic
quality of the U-2 spy plane photos. The primary disadvantages of the CORONA
instruments were reusability and timeliness. Once out of film, a satellite could no
longer be of service. Additionally, the film recovery was on a set schedule making
the system unsuitable to monitor real-time situations. The overall success of the
CORONA program, however, paved the way for the next wave of satellites, which
ushered in the modern era of remote sensing.
Due to the CORONA program's secret status, its impact on remote sensing was
indirect. Photographs of the Earth taken on manned U.S. space missions inspired the
idea of a civilian-operated remote sensing satellite. The benefits of such a satellite were
clear but the idea was still controversial. Government officials questioned whether
a satellite was as cost-efficient as aerial photography. The military were worried
that the public satellite could endanger the secrecy of the CORONA program. Other
officials worried about the political consequences of imaging other countries without
permission. However, the Department of the Interior (DOI) finally won permission
for NASA to create a satellite to monitor Earth's surface resources.
On July 23, 1972, NASA launched the Earth Resources Technology Satellite (ERTS).
The ERTS was quickly renamed to Landsat-1. The platform contained two sensors.
The first was the Return Beam Vidicon (RBV) sensor, which was essentially a
video camera. It was built by the radio and television giant, Radio Corporation of
America (RCA). The RBV immediately had problems, which included disabling the
satellite's attitude guidance system. The second sensor was the highly
experimental Multispectral Scanner (MSS). The MSS performed flawlessly and
produced superior results to the RBV. The MSS captured four separate images at
four different wavelengths of the light reflected from the Earth's surface.
This sensor had several revolutionary capabilities. The first and most important
was global imaging: scanning every spot on the Earth every 16 days. The following
image from the U.S. National Aeronautics and Space Administration (NASA)
illustrates this flight and collection pattern, a series of overlapping swaths as the
sensor orbits the Earth, capturing tiles of data each time it images a location:
It also recorded light beyond the visible spectrum. While it did capture green and
red light visible to the human eye, it also scanned near-infrared light at two different
wavelengths not visible to the human eye. The images were stored and transmitted
digitally to three different ground stations in Maryland, California, and Alaska. The
multispectral capability and digital format meant that the aerial view provided by
Landsat wasn't just another photograph from the sky. It was beaming down data.
This data could be processed by computers to output derivative information about
the Earth in the same way a GIS provided derivative information about the Earth
by analyzing one geographic feature in the context of another. NASA promoted the
use of Landsat worldwide and made the data available at very affordable prices to
anyone who asked.
This global imaging capability led to many scientific breakthroughs including the
discovery of previously unknown geography as late as 1976. For example, using
Landsat imagery, the Government of Canada located a tiny uncharted island
inhabited by polar bears. They named the new landmass Landsat Island.
Landsat 1 was followed by six other missions, which were turned over to the
National Oceanic and Atmospheric Administration (NOAA) as the responsible agency.
Landsat 6 failed to achieve orbit due to a ruptured manifold, which disabled its
maneuvering engines. During some of these missions, the satellites were managed
by the Earth Observation Satellite (EOSAT) company, now called Space Imaging,
but returned to government management by the Landsat 7 mission. The following
image from NASA is a sample of a Landsat 7 product:
The Landsat Data Continuity Mission (LDCM) was launched on February 13,
2013, and began collecting images on April 27, 2013, as part of its calibration cycle
to become Landsat 8. The LDCM is a joint mission between NASA and the U.S.
Geological Survey (USGS).
Elevation data
A Digital Elevation Model (DEM) is a three-dimensional representation of a planet's
terrain. In the context of this book, this planet is the Earth. The history of digital elevation
models is far less complicated than that of remotely sensed imagery, but no less significant.
Before computers, representations of elevation data were limited to topographic maps
created through traditional land surveys. Technology existed to create three-dimensional
models from stereoscopic images or physical models from materials such as clay or
wood, but these approaches were not widely used for geography.
The concept of digital elevation models began in 1986 when the French space agency,
Centre national d'études spatiales (CNES), launched its SPOT-1 satellite, which
included a stereoscopic radar. This system created the first usable DEM. Several
other U.S. and European satellites followed this model with similar missions.
In February 2000, the Space Shuttle Endeavour conducted the Shuttle Radar
Topography Mission (SRTM), which collected elevation data over 80% of the
Earth's surface using a special radar antenna configuration that allowed a single
pass. This model was surpassed in 2009 by the joint U.S. and Japanese mission using
the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER)
sensor aboard NASA's Terra satellite. This system captured 99% of the Earth's
surface but has proven to have minor data issues. As the Space Shuttle's orbit did not
cross the Earth's poles, it did not capture the entire surface. SRTM remains the gold
standard. The following image from the USGS shows a colorized DEM known as a
hillshade. Greener areas are lower elevations while yellow and brown areas are
mid-range to high elevations:
Computer-aided drafting
Computer-aided drafting (CAD) is worth mentioning, though it does not directly
relate to geospatial analysis. The history of CAD system development parallels
and intertwines with the history of geospatial analysis. CAD is an engineering
tool used to model two- and three-dimensional objects usually for engineering
and manufacturing. The primary difference between a geospatial model and CAD
model is that a geospatial model is referenced to the Earth, whereas a CAD model
can possibly exist in abstract space. For example, a three-dimensional blueprint of a
building in a CAD system would not have a latitude or longitude, but in a GIS, the
same building model would have a location on the Earth. However, over the years,
CAD systems have taken on many features of GIS systems and are commonly used
for smaller GIS projects. Likewise, many GIS programs can import CAD data that
has been georeferenced. Traditionally, CAD tools were designed primarily for the
engineering of data that was not geospatial.
However, engineers who became involved with geospatial engineering projects,
such as designing a city utility electric system, would use the CAD tools that they
were familiar with in order to create maps. Over time, both the GIS software evolved
to import the geospatial-oriented CAD data produced by engineers, and CAD
tools evolved to support geospatial data creation and better compatibility with GIS
software. AutoCAD by Autodesk and ArcGIS by Esri were the leading commercial
packages to develop this capability and the Geospatial Data Abstraction Library
(GDAL) OGR library developers added CAD support as well.
You want complete control of the underlying algorithms, data, and execution
The geospatial industry is gradually moving away from the traditional workflow
in which teams of analysts use expensive desktop software to produce geospatial
products. Geospatial analysis is being pushed towards automated processes that
reside in the cloud. End user software is moving towards task-specific tools, many of
which are accessed from mobile devices. Knowledge of geospatial concepts and data,
as well as the ability to build custom geospatial processes, is where the geospatial
work of the near future lies.
Imagine that you are looking at a cat. We know some information about the cat,
such as its name, age, color, and size. These features are the properties of the cat.
The cat also exhibits behaviors such as eating, sleeping, jumping, and purring. In
object-oriented programming, objects have properties and behaviors too. You can
model a real-world object such as the cat in our example or something more abstract
such as a bank account.
Most concepts in object-oriented programming are far more abstract than the simple
cat paradigm or even the bank account. However, in geospatial analysis, the objects
that are modeled remain concrete, such as the simple cat analogy, and in many
cases are living, breathing cats. Geospatial analysis allows you to continue with the
simple cat analogy and even visualize it. The following map represents the feral cat
population of Australia using data provided by the Atlas of Living Australia (ALA):
Thematic maps
As its name suggests, a thematic map portrays a specific theme. A general reference
map visually represents features as they relate geographically for navigation or
planning. A thematic map goes beyond location to provide the geographic context
for information around a central idea. Usually, a thematic map is designed for a
targeted audience to answer specific questions. The value of thematic maps lies in
what they do not show. A thematic map will use minimal geographic features to
avoid distracting the reader from the theme. Most thematic maps include political
boundaries such as country or state borders but omit navigational features, such as
street names or points of interest beyond major landmarks that orient the reader.
The cholera map by Dr. John Snow earlier in this chapter is a perfect example of a
thematic map. Common uses for thematic maps are visualizing health issues, such as
disease, election results, and environmental phenomena such as rainfall. These maps
are also the most common output of geospatial analysis. The following map from the
United States Census Bureau shows cancer mortality rates by state:
Thematic maps tell a story and are very useful. However, it is important to
remember that while thematic maps are models of reality as any other map, they are
also generalizations of information. Two different analysts using the same source
of information will often come up with very different thematic maps depending
on how they analyze and summarize the data. They may also choose to focus on
different aspects of the dataset. The technical nature of thematic maps often leads
people to treat them as if they are scientific evidence. However, geospatial analysis
is often inconclusive. While the analysis may be based on scientific data, the analyst
does not always follow the rigor of the scientific method. In his classic book, How
to Lie with Maps, Mark Monmonier, University of Chicago Press, demonstrates in great
detail how maps are easily manipulated models of reality, which are commonly
abused. This fact doesn't degrade the value of these tools. The legendary statistician,
George Box, wrote in his 1987 book, Empirical Model-Building and Response Surfaces,
that "Essentially, all models are wrong, but some are useful." Thematic maps have been
used as guides to start (and end) wars, stop deadly diseases in their tracks, win
elections, feed nations, fight poverty, protect endangered species, and rescue those
impacted by disaster. Thematic maps may be the most useful models ever created.
Spatial databases
In its purest form, a database is simply an organized collection of information.
A database management system (DBMS) is an interactive suite of software that
can interact with a database. People often use the word database as a catch-all term
referring to both the DBMS and underlying data structure. Databases typically
contain alphanumeric data and, in some cases, binary large objects or blobs, which
can store binary data such as images. Most databases also allow a relational database
structure in which entries in normalized tables can be referenced to each other in
order to create many-to-one and one-to-many relationships among data.
Spatial databases, also known as geodatabases, use specialized software to extend
a traditional relational database management system (RDBMS) to store and
query data defined in two-dimensional or three-dimensional space. Some systems
also account for a series of data over time. In a spatial database, attributes about
geographic features are stored and queried as traditional relational database
structures. The spatial extensions allow you to query geometries using Structured
Query Language (SQL) in a similar way as traditional database queries. Spatial
queries and attribute queries can also be combined to select results based on both
location and attributes.
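As a hedged illustration, a spatial query issued through the psycopg2 library against a PostGIS-enabled database might combine an attribute filter and a spatial filter in one SQL statement. The table and column names here are hypothetical; ST_DWithin, ST_MakePoint, and ST_SetSRID are standard PostGIS functions:
import psycopg2

# Hypothetical table and columns for illustration only.
# The 0.05 search distance is in the layer's units (degrees here).
conn = psycopg2.connect("dbname=gisdb user=gis")
cur = conn.cursor()
cur.execute("""
    SELECT name FROM hospitals
    WHERE beds > 100
    AND ST_DWithin(geom,
                   ST_SetSRID(ST_MakePoint(-104.98, 39.74), 4326),
                   0.05)
""")
print(cur.fetchall())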
Spatial indexing
Spatial indexing is a process that organizes the geospatial vector data for faster
retrieval. It is a way of prefiltering the data for common queries or rendering.
Indexing is commonly used in large databases to speed up queries. Spatial
data is no different. Even a moderately-sized geodatabase can contain millions of
points or objects. If you perform a spatial query, every point in the database must
be considered by the system in order to include it or eliminate it in the results.
Spatial indexing groups data in ways that allow large portions of the dataset to be
eliminated from consideration by doing computationally simpler checks before
going into a detailed and slower analysis of the remaining items.
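As a minimal sketch of the idea, a simple grid index buckets points by cell so that a query only examines nearby buckets; production systems typically use more sophisticated structures such as R-trees or quadtrees:
from collections import defaultdict

CELL = 1.0  # index cell size in map units

def cell_key(x, y):
    # Bucket a coordinate into its grid cell
    return (int(x // CELL), int(y // CELL))

index = defaultdict(list)
for pt in [(1.2, 3.4), (1.3, 3.5), (9.9, 0.1)]:
    index[cell_key(*pt)].append(pt)

# A query near (1.25, 3.45) inspects one bucket instead of every point:
candidates = index[cell_key(1.25, 3.45)]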
Metadata
Metadata is defined as data about data. Accordingly, geospatial metadata is data
about geospatial datasets that provides traceability for the source and history of
a dataset as well as a summary of the technical details. Metadata also provides
long-term preservation of information holdings. Geospatial metadata can be
represented by several possible standards. One of the most prominent standards
is the international standard, ISO 19115-1, which includes hundreds of potential
fields to describe a single geospatial dataset. Additionally, the ISO 19115-2 includes
extensions for geospatial imagery and gridded data. Some example fields include
spatial representation, temporal extent, and lineage. The primary use of metadata is
cataloging datasets. Modern metadata can be ingested by geographic search engines
making it potentially discoverable by other systems automatically. It also lists points
of contact for a dataset if you have questions. Metadata is an important support tool
for geospatial analysts and adds credibility and accessibility to your work. The Open
Geospatial Consortium (OGC) created the Catalog Service for the Web (CSW) to
manage metadata. The pycsw Python library implements the CSW standard. You can
learn more about it at http://pycsw.org.
Map projections
Map projections have entire books devoted to them and can be a challenge for new
analysts. If you take any three-dimensional object and flatten it on a plane, such as
your screen or a sheet of paper, the object is distorted. Many grade school geography
classes demonstrate this concept by having students peel an orange and then attempt
to lay the peel flat on their desk in order to understand the resulting distortion. The
same effect occurs when you take the round shape of the Earth and project it on a
computer screen.
Rendering
The exciting part of geospatial analysis is visualization. As geospatial analysis is a
computer-based process, it is good to be aware of how geographic data appears on a
computer screen.
Geographic data including points, lines, and polygons are stored numerically as
one or more points, which come in (x,y) pairs or (x,y,z) tuples. The x represents the
horizontal axis on a graph. The y represents the vertical axis. The z represents terrain
elevation. In computer graphics, a computer screen is represented by an x and y axis.
A z axis is not used because the computer screen is treated as a two-dimensional
plane by most graphics software APIs. However, as desktop computing power
continues to improve, three-dimensional maps are starting to become more common.
Another important factor is screen coordinates versus world coordinates. Geographic
data is stored in a coordinate system representing a grid overlaid on the Earth, which
is three-dimensional and round. Screen coordinates, also known as pixel coordinates,
represent a grid of pixels on a flat, two-dimensional computer screen. Mapping x
and y world coordinates to pixel coordinates is fairly straightforward and involves a
simple scaling algorithm. However, if a z coordinate exists, then a more complicated
transform must be performed to map coordinates from three-dimensional space to
a two-dimensional plane. These transformations can be computationally costly and
therefore slow if not handled correctly.
In the case of remote sensing data, the challenge is typically the file size. Even a
moderately-sized satellite image that is compressed can be tens, if not hundreds,
of megabytes. Images can be compressed using lossless or lossy methods. Lossless
methods use tricks to reduce the file size without discarding any data. Lossy
compression algorithms reduce the file size by reducing the amount of data in the
image while avoiding a significant change in the appearance of the image. Rendering
an image on the screen can be computationally-intensive. Most remote sensing file
formats allow for the storing of multiple lower-resolution versions of the image,
called overviews or pyramids, for the sole purpose of faster rendering at different
scales. When you zoom out to a scale where the detail of the full-resolution image
would not be visible, a preprocessed, lower-resolution version of the image is
displayed quickly and seamlessly.
Images as data
Raster data is captured digitally as square tiles. This means that the data is stored on
a computer as a numerical array of rows and columns. If the data is multispectral, the
dataset will usually contain multiple arrays of the same size, which are geospatially
referenced together to represent a single area on the Earth. These different arrays are
called bands. Any numerical array can be represented on a computer as an image. In
fact, all computer data is ultimately numbers. It is important in geospatial analysis
to think of images as a numeric array because mathematical formulas are used to
process them.
In remotely sensed images, each pixel represents both space (location on the Earth
of a certain size) and the reflectance captured as light reflected from the Earth at
this location into space. So, each pixel has a ground size and contains a number
representing the intensity. As each pixel is a number, we can perform mathematical
equations on this data to combine data from different bands and highlight specific
classes of objects in the image. If the wavelength value is beyond the visible
spectrum, we can highlight features not visible to the human eye. Substances such
as chlorophyll in plants can be greatly contrasted using a specific formula called
Normalized Difference Vegetation Index (NDVI).
By processing remotely sensed images, we can turn this data into visual information.
Using the NDVI formula, we can answer the question, what is the relative health of
the plants in this image? You can also create new types of digital information, which
can be used as input for computer programs to output other types of information.
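As a minimal sketch, assuming the red and near-infrared bands are available as numpy arrays of the same shape, the NDVI formula, (NIR - Red) / (NIR + Red), is a single line of band math:
import numpy as np

# Tiny stand-in arrays for the red and near-infrared bands
red = np.array([[0.1, 0.3], [0.2, 0.4]])
nir = np.array([[0.5, 0.6], [0.7, 0.8]])

# NDVI ranges from -1 to 1; higher values indicate healthier vegetation
ndvi = (nir - red) / (nir + red)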
Data structures
GIS vector data uses coordinates consisting of, at a minimum, an x horizontal value
and a y vertical value to represent a location on the Earth. In many cases, a point may
also contain a z value. Other ancillary values are possible including measurements
or timestamps.
These coordinates are used to form points, lines, and polygons to model real-world
objects. Points can be a geometric feature in and of themselves or they can connect
line segments. Closed areas created by line segments are considered polygons.
Polygons model objects such as buildings, terrain, or political boundaries.
A GIS feature can consist of a single point, line, or polygon or it can consist of more
than one shape. For example, in a GIS polygon dataset containing world country
boundaries, the Philippines, which is made up of 7,107 islands, would be represented
as a single country made up of thousands of polygons.
Vector data typically represents topographic features better than raster data. Vector
data has better accuracy potential and is more precise. However, to collect vector
data on a large scale is also traditionally more costly than raster data.
Two other important terms related to vector data structures are bounding box and
convex hull. The bounding box or minimum bounding box is the smallest possible
rectangle that contains all of the points in a dataset. The following image demonstrates
a bounding box for a collection of points:
The convex hull of a dataset is similar to the bounding box, but instead of a rectangle,
it is the smallest possible polygon that can contain the dataset. The bounding box of a
dataset always contains its convex hull. The following image shows the same point
data as the previous example with the convex hull polygon shown in red:
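Computing a bounding box is straightforward; a minimal sketch for a list of (x, y) points follows. The convex hull requires a real algorithm, such as gift wrapping or Graham scan, and is usually left to a geometry library:
points = [(2, 1), (5, 3), (4, 7), (1, 4)]
xs = [x for x, y in points]
ys = [y for x, y in points]
# (minx, miny, maxx, maxy)
bounding_box = (min(xs), min(ys), max(xs), max(ys))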
Buffer
A buffer operation can be applied to spatial objects including points, lines, or
polygons. This operation creates a polygon around the object at a specified distance.
Buffer operations are used for proximity analysis, for example, establishing a safety
zone around a dangerous area. In the following image, the black shapes represent
the original geometry while the red outlines represent the larger buffer polygons
generated from the original shape:
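As a minimal sketch, a buffer around a single point can be approximated as a circle polygon; buffering lines and polygons requires offsetting every segment and is usually left to a geometry library such as Shapely:
import math

def point_buffer(x, y, distance, n=32):
    # Approximate the buffer as an n-vertex circle polygon
    return [(x + distance * math.cos(2 * math.pi * i / n),
             y + distance * math.sin(2 * math.pi * i / n))
            for i in range(n)]

safety_zone = point_buffer(-104.98, 39.74, 0.5)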
Dissolve
A dissolve operation creates a single polygon out of adjacent polygons. A common
use for a dissolve operation is to merge two adjacent properties in a tax database
that have been purchased by a single owner. Dissolves are also used to simplify data
extracted from remote sensing:
Generalize
Objects that have more points than necessary for the geospatial model can be
generalized to reduce the number of points used to represent the shape. This
operation usually requires a few attempts to get the optimal number of points
without compromising the overall shape. It is a data optimization technique to
simplify data for the efficiency of computing or better visualization. This technique
is useful in web-mapping applications. Computer screens typically display around 72
dots per inch (dpi). Highly detailed point data, whose extra detail would not be visible
anyway, can be reduced so that less bandwidth is used to send a visually equivalent map to the user:
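A crude sketch of the idea is to simply thin the points; real tools usually apply shape-aware algorithms such as Douglas-Peucker, but the goal is the same, fewer points with a visually similar result:
dense_line = [(i, (i * 7) % 13) for i in range(100)]
simplified = dense_line[::5]  # keep every 5th point
# Always keep the original endpoint so the line ends in the same place
if simplified[-1] != dense_line[-1]:
    simplified.append(dense_line[-1])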
Intersection
An intersection operation is used to see whether one feature intersects with one or
more other features. This operation is used in spatial queries for proximity analysis
and is often a follow-on operation to a buffer analysis:
Merge
A merge operation combines two or more non-overlapping shapes in a single
multishape object. Multishape objects mean that the shapes maintain separate
geometries but are treated as a single feature with a single set of attributes by the GIS:
Point in polygon
A fundamental geospatial operation is checking to see whether a point is inside a
polygon. This one operation is the atomic building block of many different types
of spatial queries. If the point is on the boundary of the polygon, it is considered
inside. Very few spatial queries exist that do not rely on this calculation in some way.
However, it can be very slow on a large number of points.
The most common and efficient algorithm to detect if a point is inside a polygon is
called the ray casting algorithm. First, a test is performed to see if the point is on
the polygon boundary. Next, the algorithm draws a line from the point in question
in a single direction. The program counts the number of times the line crosses the
polygon boundary until it reaches the bounding box of the polygon. The bounding
box is the smallest box that can be drawn around the entire polygon. If the number is
odd, the point is inside. If the number of boundary intersections is even, the point
is outside:
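A minimal sketch of the ray casting test follows; it counts crossings of a ray extended to the right of the test point (the separate boundary check mentioned above is omitted for brevity):
def point_in_polygon(x, y, polygon):
    # polygon is a list of (x, y) vertices
    inside = False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        xi, yi = polygon[i]
        xj, yj = polygon[j]
        # Does edge (j, i) straddle the ray, and is the crossing
        # to the right of the test point?
        if ((yi > y) != (yj > y)) and \
                (x < (xj - xi) * (y - yi) / (yj - yi) + xi):
            inside = not inside
        j = i
    return inside

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(point_in_polygon(5, 5, square))   # True
print(point_in_polygon(15, 5, square))  # False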
Union
The union operation is less common but very useful to combine two or more
overlapping polygons in a single shape. It is similar to dissolve, but in this case, the
polygons are overlapping as opposed to being adjacent. Usually, this operation is used
to clean up automatically-generated feature datasets from remote sensing operations:
Join
A join or SQL join is a database operation used to combine two or more tables
of information. Relational databases are designed to avoid storing redundant
information for one-to-many relationships. For example, a U.S. state may contain
many cities. Rather than creating a table for each state containing all of its cities, a
table of states with numeric IDs is created, while a table for all the cities in every
state is created with a state numeric ID. In a GIS, you can also have spatial joins that
are part of the spatial extension software for a database. In a spatial join, you combine
the attributes of two features in the same way that you do in a SQL join, but the relation
is based on the spatial proximity of the two features. To follow the previous cities
example, we could add the county name that each city resides in using a spatial
join. The cities layer could be loaded over a county polygon layer whose attributes
contain the county name. The spatial join would determine which city is in which
county and perform a SQL join to add the county name to each city's attribute row.
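As a minimal sketch of the concept using plain Python dictionaries (the data here is hypothetical), an attribute join matches rows on a shared key; a spatial join would replace that key lookup with a spatial test such as point in polygon:
# A "states" table keyed by ID and a "cities" table referencing it
states = {1: "Colorado", 2: "Utah"}
cities = [("Denver", 1), ("Boulder", 1), ("Moab", 2)]

# Join each city to its state name through the shared numeric ID
joined = [(name, states[state_id]) for name, state_id in cities]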
Polygons must have at least four points; the first and last points must be the same.
Different geospatial software packages and libraries handle exceptions to these rules
differently, which can lead to confusing errors or software behaviors. The safest route is
to make sure that your polygons obey these rules. There is one more important piece
of information about polygons. A polygon is by definition a closed shape, which
means that the first and last vertices of the polygon are identical. Some geospatial
software will throw an error if you haven't explicitly duplicated the first point as the
last point in the polygon dataset. Other software will automatically close the polygon
without complaining. The data format that you use to store your geospatial data may
also dictate how polygons are defined. This issue is a gray area and so it didn't make
the polygon rules, but knowing this quirk will come in handy someday when you
run into an error that you can't explain easily.
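A minimal sketch of a defensive check that explicitly closes a polygon before handing it to software that demands a duplicated first point:
polygon = [(-109, 37), (-109, 41), (-102, 41), (-102, 37)]
if polygon[0] != polygon[-1]:
    # Duplicate the first vertex as the last to close the ring
    polygon.append(polygon[0])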
Band math
Band math is multidimensional array mathematics. In array math, arrays are treated
as single units, which are added, subtracted, multiplied, and divided. The
corresponding numbers in each row and column across multiple arrays are computed
simultaneously. These arrays are termed matrices, and computations involving
matrices are the focus of linear algebra.
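A minimal sketch using numpy, where whole arrays are combined element by element so a single expression processes every pixel at once:
import numpy as np

band1 = np.array([[10, 20], [30, 40]], dtype=float)
band2 = np.array([[50, 60], [70, 80]], dtype=float)

difference = band2 - band1  # per-pixel subtraction
ratio = band2 / band1       # per-pixel division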
Change detection
Change detection is the process of taking two images of the same location at
different times and highlighting the changes. A change can be due to the addition
of something on the ground, such as a new building or the loss of a feature such as
coastal erosion. There are many algorithms to detect changes among images and
also determine qualitative factors such as how long ago the change took place. The
following image from a research project by the U.S. Oak Ridge National Laboratory
(ORNL) shows rainforest deforestation between 1984 and 2000 in the state of
Rondonia, Brazil. Colors are used to show how recently the forest was cut. Green
represents virgin rainforests, white is a forest cut within two years of the end of the
date range, red is within 22 years, and the other colors fall in between as described in
the legend:
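The simplest form of change detection is image differencing. A minimal sketch, assuming two co-registered single-band images as numpy arrays:
import numpy as np

before = np.array([[100, 100], [100, 100]], dtype=np.int16)
after = np.array([[100, 50], [100, 200]], dtype=np.int16)

# Nonzero cells mark locations that changed between the two dates
change = after - before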
Histogram
A histogram is the statistical distribution of the values in a dataset. The horizontal
axis represents the unique values in the dataset, while the vertical axis represents the
frequency of each value in the raster. A histogram is a key operation in most
raster processing. It can be used for everything from enhancing contrast in an image
to serving as a basis for object classification and image comparison. The following
example from NASA shows a histogram for a satellite image that has been classified
into different categories representing the underlying surface features:
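A minimal sketch with numpy: for an 8-bit band, the histogram counts how many pixels hold each of the 256 possible values:
import numpy as np

band = np.random.randint(0, 256, (100, 100))  # stand-in raster band
counts, bin_edges = np.histogram(band, bins=256, range=(0, 256))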
Feature extraction
Feature extraction is the manual or automatic digitization of features in an image
to points, lines, or polygons. This process serves as the basis for the vectorization
of images in which a raster is converted to a vector dataset. An example of feature
extraction is extracting a coastline from a satellite image and saving it as a vector
dataset. If this extraction is performed over several years, you could monitor the
erosion or other changes along this coastline.
Supervised classification
Objects on the Earth reflect different wavelengths of light depending on the materials
that they are made of. In remote sensing, analysts collect wavelength signatures for
specific types of land cover (for example, concrete) and build a library for a specific
area. A computer can then use this library to automatically locate classes in the
library in a new image of the same area.
Unsupervised classification
In an unsupervised classification, a computer groups pixels with similar reflectance
values in an image without any other reference information other than the histogram
of the image.
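A minimal sketch of the idea, grouping pixels into k classes by equal-interval binning of their values; real classifiers such as k-means cluster iteratively, but the classes still come from the data alone:
import numpy as np

band = np.random.randint(0, 256, (100, 100))  # stand-in raster band
k = 5
bins = np.linspace(band.min(), band.max(), k + 1)
# Assign each pixel a class index from 0 to k-1
classes = np.digitize(band, bins[1:-1])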
The only module used in the following example is the turtle module that provides
a very simple graphics engine based on the Tkinter library included with Python.
If you used the installers for Windows or Mac OS X, the Tkinter library should be
included already. If you compiled Python yourself or are using a distribution from
somewhere besides Python.org, then make sure that you can import the turtle
module by typing the following at a command prompt to run the turtle demo script:
python -m turtle
If your Python distribution does not have Tkinter, you can find information on
installing it from the following page. The information is for Python 2.3 but the
process is still the same:
http://tkinter.unpythonic.net/wiki/How_to_install_Tkinter
The official Python wiki page for Tkinter can be found here:
https://wiki.python.org/moin/TkInter
The documentation for Tkinter is in the Python Standard Library documentation that
can be found at https://docs.python.org/2/library/tkinter.html.
If you are new to Python, Dive into Python is a free online book, which covers all
the basics of Python and will bring you up to speed. For more information, refer
to http://www.diveintopython.net.
Building SimpleGIS
The code is divided into two different sections. The first is the data model section
and the second is the map renderer that draws this data. For the data model, we
will use simple Python lists. A Python list is a native data type, which serves as a
container for other Python objects in a specified order. Python lists can contain other
lists and are great for simple data structures. They also map well to more complex
structures or even databases if you decide you want to develop your script further.
The second portion of the code will render the map using the Python turtle graphics
engine. We will have only one function in the GIS that converts the world coordinates,
in this case, longitude and latitude, into pixel coordinates. All graphics engines have
an origin point of (0,0) that is usually in the top-left or lower-left corner of the canvas.
Turtle graphics are designed to teach programming visually. The turtle graphics
canvas uses an origin of (0,0) in the center, similar to a graphing calculator. The
following image illustrates the type of Cartesian graph that the turtle module uses. In
the following graph, some points are plotted in both positive and negative space:
This also means that the turtle graphics engine can have negative pixel coordinates,
which is uncommon for graphics canvases. However, for this example, the turtle
module is the quickest and simplest way to render our map.
Step by step
You can run this program interactively in the Python interpreter or you can save
the complete program as a script and run it. The Python interpreter is an incredibly
powerful way to learn new concepts because it gives you real-time feedback on
errors or unexpected program behavior. You can easily recover from these issues
and try something else until you get the results that you want:
1. In Python, you usually import modules at the beginning of the script so we'll
import the turtle module first. We'll use Python's import as feature to assign
the module the name t to save space and time when typing turtle commands:
import turtle as t
2. Next, we'll set up the data model starting with some simple variables that
allow us to access list indexes by name instead of numbers to make the code
easier to follow. Python lists index the contained objects starting with the
number 0. So, if we want to access the first item in a list called myList, we
would reference it as follows:
myList[0]
3. To make our code easier to read, we can also use a variable name assigned to
commonly used indexes:
firstItem = 0
myList[firstItem]
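The rendering code later in this chapter refers to the record fields by the names NAME, POINTS, and POP; based on that usage, the index constants map to the record layout, [name, location, population], as follows:
NAME = 0
POINTS = 1
POP = 2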
4. Now, we'll set up the data for Colorado as a list with name, polygon points,
and population. Note that the coordinates are a list within a list:
state = ["COLORADO",
         [[-109, 37], [-109, 41], [-102, 41], [-102, 37]],
         5187582]
5. The cities will be stored as nested lists. Each city's location consists of a single
point as a longitude and latitude pair. These entries will complete our GIS
data model. We'll start with an empty list called cities and then append the
data to this list for each city:
cities = []
cities.append(["DENVER",[-104.98, 39.74], 634265])
cities.append(["BOULDER",[-105.27, 40.02], 98889])
cities.append(["DURANGO",[-107.88,37.28], 17069])
6. We will now render our GIS data as a map by first defining a map size.
The width and height can be anything that you want depending on your
screen resolution:
map_width = 400
map_height = 300
7. In order to scale the map to the graphics canvas, we must first determine
the bounding box of the largest layer, which is the state. We'll set the map's
bounding box to a global scale and reduce it to the size of the state. To do so,
we'll loop through the longitude and latitude of each point and compare it
with the current minimum and maximum x and y values. If it is larger than
the current maximum or smaller than the current minimum, we'll make this
value the new maximum or minimum, respectively:
minx = 180
maxx = -180
miny = 90
maxy = -90
for x, y in state[POINTS]:
    if x < minx: minx = x
    elif x > maxx: maxx = x
    if y < miny: miny = y
    elif y > maxy: maxy = y
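As an aside, if you prefer a more compact style, the same bounding box can be computed with min() and max() and generator expressions; this sketch is equivalent to the loop above:
minx = min(x for x, y in state[POINTS])
maxx = max(x for x, y in state[POINTS])
miny = min(y for x, y in state[POINTS])
maxy = max(y for x, y in state[POINTS])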
8. The second step to scaling is to calculate a ratio between the actual state and
the tiny canvas that we will render it on. This ratio is used for coordinate to
pixel conversion. We get the size along the x and y axes of the state and then
we divide the map width and height by these numbers to get our scaling ratio:
dist_x = maxx - minx
dist_y = maxy - miny
x_ratio = map_width / dist_x
y_ratio = map_height / dist_y
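To make the ratios concrete: Colorado spans dist_x = -102 - (-109) = 7 degrees of longitude and dist_y = 41 - 37 = 4 degrees of latitude, so x_ratio = 400 / 7 ≈ 57.1 and y_ratio = 300 / 4 = 75 pixels per degree.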
9. Next, we'll create the one function in our GIS, convert(), which uses these
ratios to transform a point from world coordinates to pixel coordinates and
then offsets the result, because the turtle canvas is centered on (0,0):
def convert(point):
    lon = point[0]
    lat = point[1]
    x = map_width - ((maxx - lon) * x_ratio)
    y = map_height - ((maxy - lat) * y_ratio)
    # Python turtle graphics start in the
    # middle of the screen
    # so we must offset the points so they are centered
    x = x - (map_width/2)
    y = y - (map_height/2)
    return [x,y]
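As a quick sanity check (not part of the listing), converting Denver's location should land inside our 400 x 300 canvas; the values here are rounded:
>>> convert([-104.98, 39.74])
[29.71, 55.5]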
10. Now for the exciting part! We're ready to render our GIS as a thematic map.
The turtle module uses the concept of a cursor called a pen. Moving the
cursor around the canvas is exactly the same as moving a pen around a piece
of paper. The cursor will draw a line when you move it. So, you'll notice that
throughout the code, we use the t.up() and t.down() commands to pick
the pen up when we want to move to a new location and put it down when
we're ready to draw. We have some important steps in this section. As the
border of Colorado is a polygon, we must draw a line between the last point
and first point to close the polygon. We can also leave out the closing step
and just add a duplicate point to the Colorado dataset. Once we draw the
state, we'll use the write() method to label the polygon:
t.up()
first_pixel = None
for point in state[POINTS]:
    pixel = convert(point)
    if not first_pixel:
        first_pixel = pixel
    t.goto(pixel)
    t.down()
t.goto(first_pixel)
t.up()
t.goto([0,0])
t.write(state[NAME], align="center", font=("Arial", 16, "bold"))
11. If we were to run the code at this point, we would see a simplified map of the
state of Colorado: its rectangular outline with the state name labeled in the center.
12. Now, we'll render the cities as point locations and label them with their
names and population. As the cities are a group of features in a list, we'll
loop through them to render them. Instead of drawing lines by moving the
pen around, we'll use the turtle dot() method to plot a small circle at the
pixel coordinate returned by our SimpleGIS convert() function. We'll then
label the dot with the city's name and add the population. You'll notice that
we must convert the population number to a string in order to use it in the
turtle write() method. To do so, we use Python's built-in function, str():
for city in cities:
    pixel = convert(city[POINTS])
    t.up()
    t.goto(pixel)
    # Place a point for the city
    t.dot(10)
    # Label the city
    t.write(city[NAME] + ", Pop.: " + str(city[POP]), align="left")
    t.up()
13. Now we will perform one last operation to prove that we have created a real
GIS. We will perform an attribute query on our data to determine which city
has the largest population. Then, we'll perform a spatial query to see which
city lies the furthest west. Finally, we'll print the answers to our questions on
our thematic map page safely out of the range of the map.
14. As our query engine, we'll use Python's built-in min() and max() functions.
These functions take a list as an argument and return its minimum and
maximum values, respectively. They also have a special feature, the key
argument, that lets you compare complex objects. As we are dealing
with nested lists in our data model, we'll take advantage of the key argument
in these functions. The key argument accepts a function that is applied to
each item to produce the value that is actually compared. In this case, we
want to isolate the population values for comparison, and later the points.
We could write a whole new function to return the specified value, but we
can use Python's lambda keyword instead. The lambda keyword defines an
anonymous function that is used inline. Other Python functions can be used
inline, for example, the string function, str(), but they are not anonymous.
This small function will isolate our value of interest.
15. So our first question is, which city has the largest population?
biggest_city = max(cities, key=lambda city: city[POP])
t.goto(0,-200)
t.write("The biggest city is: " + biggest_city[NAME])
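To see exactly what the lambda is doing, here is an equivalent version of the preceding query using a named helper function instead; both isolate the population for comparison:
# A named helper equivalent to lambda city: city[POP]
def by_population(city):
    return city[POP]

biggest_city = max(cities, key=by_population)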
16. The next question is, which city lies the furthest west?
western_city = min(cities, key=lambda city: city[POINTS])
t.goto(0,-220)
t.write("The western-most city is: " + western_city[NAME])
17. In the preceding query, we use Python's built-in min() function to select the
smallest longitude value. This works because we represented our city
locations as longitude and latitude pairs, and Python compares lists element
by element, so the longitude, which comes first, decides the result. Other
point representations are possible, including some for which this code
would need to be modified to work correctly. However, for our SimpleGIS, we
are using a common point representation to make it as intuitive as possible.
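If you'd rather make the comparison explicit instead of relying on list comparison, a keyed version like this sketch isolates the longitude directly and works for any point stored as a longitude and latitude pair:
# Compare only the longitude, which is index 0 of each point
western_city = min(cities, key=lambda city: city[POINTS][0])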
These last two commands are just for cleanup. First, we hide the cursor.
Then, we call the turtle done() method, which keeps the turtle graphics window
with our map open until we choose to close it using the close handle at the top of
the window.
t.pen(shown=False)
t.done()
Whether you followed along using the Python interpreter or you ran the complete
program as a script, you should see the map rendered in real time: the Colorado
outline, the three labeled city points, and the two query answers printed below
the map.
Congratulations! You have followed in the footsteps of Paleolithic hunters, the father
of GIS Dr. Roger Tomlinson, geospatial pioneer Howard Fisher, and game-changing
humanitarian programmers to create a functional, extensible, and technically
complete geographic information system. It took less than 60 lines of pure Python
code! You will be hard-pressed to find another programming language that can
create a complete GIS using only its core libraries in so few lines of readable
code. Even if you did, it is unlikely that the language could carry you through
the geospatial Python journey that you'll take in the rest of this book.
As you can see, there is lots of room for expansion of SimpleGIS. Here are some
other ways that you might expand this simple tool using the reference material for
Tkinter and Python linked at the beginning of this section:
•	Create an overview map in the top-right corner with a U.S. border outline and Colorado's location in the U.S.
•	Make a list of states and cities, and add more states and cities to it
The possibilities are endless. SimpleGIS can also be used as a way to quickly test and
visualize geospatial algorithms that you come across. If you want to add more data
layers, you can create more lists, but these lists would become difficult to manage. In
this case, you can use another Python module included in the standard
distribution: sqlite3. It provides a full SQL database engine, SQLite, that can be
saved to disk or run in memory.
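As a rough sketch of that idea, using the same city data as before, the sqlite3 module can hold the layer in memory and answer our attribute query with SQL:
import sqlite3

# An in-memory database; pass a filename instead to save it to disk
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cities (name TEXT, lon REAL, lat REAL, pop INTEGER)")
conn.executemany("INSERT INTO cities VALUES (?, ?, ?, ?)", [
    ("DENVER", -104.98, 39.74, 634265),
    ("BOULDER", -105.27, 40.02, 98889),
    ("DURANGO", -107.88, 37.28, 17069),
])
# The same "biggest city" query we ran with max()
row = conn.execute("SELECT name FROM cities ORDER BY pop DESC LIMIT 1").fetchone()
print(row[0])  # DENVER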
Summary
Well done! You are now a geospatial analyst. In this chapter, you learned the state
of the art of geospatial analysis through the story of Ushahidi and the Ebola virus.
You learned the history of geospatial analysis and the technologies that support it.
You became familiar with foundational GIS and remote sensing concepts that will
serve you through the rest of the book. You actually built a working GIS that can be
expanded to do whatever you can imagine! In the next chapter, we'll tackle the data
formats that you'll encounter as a geospatial analyst. Geospatial analysts spend far
more time dealing with data than actually performing analysis. Understanding the
data that you're working with is essential to working efficiently and having fun.