ObsPy Tutorial
Release 0.9.2
CONTENTS
1 Introduction to ObsPy
  1.1 Python Introduction for Seismologists
  1.2 UTCDateTime
  1.3 Reading Seismograms
  1.4 Waveform Plotting Tutorial
  1.5 Retrieving Data from Data Centers
  1.6 Filtering Seismograms
  1.7 Downsampling Seismograms
  1.8 Merging Seismograms
  1.9 Beamforming - FK Analysis
  1.10 Seismogram Envelopes
  1.11 Plotting Spectrograms
  1.12 Trigger/Picker Tutorial
  1.13 Poles and Zeros, Frequency Response
  1.14 Seismometer Correction/Simulation
  1.15 Clone an Existing Dataless SEED File
  1.16 Export Seismograms to MATLAB
  1.17 Export Seismograms to ASCII
  1.18 Anything to MiniSEED
  1.19 Beachball Plot
  1.20 Basemap Plot with Beachballs
  1.21 Interfacing R from Python
  1.22 Coordinate Conversions
  1.23 Hierarchical Clustering
  1.24 Visualizing Probabilistic Power Spectral Densities
  1.25 Array Response Function
  1.26 Continuous Wavelet Transform
  1.27 Time Frequency Misfit
  1.28 Visualize Data Availability of Local Waveform Archive
  1.29 Travel Time Plot
  1.30 Cross Correlation Pick Correction
2 Advanced Exercise
  2.1 Advanced Exercise
This tutorial does not attempt to be comprehensive and cover every single feature. Instead, it introduces many of
ObsPy’s most noteworthy features, and will give you a good idea of the library’s flavor and style.
CHAPTER ONE: INTRODUCTION TO OBSPY
1.1 Python Introduction for Seismologists
Here we want to give a small, incomplete introduction to the Python programming language, with links to useful
packages and further resources. The key features are explained via the following Python script:
1 #!/usr/bin/env python
2 import glob
3 from obspy.core import read
4
5 for file in glob.glob('*.gse2'):  # loop over all matching files (the pattern is illustrative)
6     st = read(file)
7     tr = st[0]
8     msg = "%s %s %f %f" % (tr.stats.station, str(tr.stats.starttime),
9                            tr.data.mean(), tr.data.std())
10     print msg
Note: The extent of loop bodies (and all other blocks) in Python is determined by the indentation level. Do not mix
spaces and tabs in your program code for indentation; this produces bugs that are not easy to identify.
Line 6 Uses the read() function from the obspy.core module to read in the seismogram to a
Stream object named st.
Line 7 Assigns the first Trace object of the list-like Stream object to the variable tr.
Lines 8-9 A Python counterpart for the well-known C function sprintf is the % operator acting on a
format string. Here we print the header attributes station and starttime as well as the return
values of the methods mean() and std() acting on the data sub-object of the Trace (which is
of type numpy.ndarray).
Line 10 Prints the content of the variable msg to the screen.
As Python is an interpreted language, we recommend using the IPython shell for rapid development and trying things
out. It supports tab completion, history expansion and various other features. E.g. type help(glob.glob) or
glob.glob? to see the help of the glob() function (the module must be imported beforehand).
Further Resources
1.2 UTCDateTime
All absolute time values within ObsPy are consistently handled with the UTCDateTime class. It is based on a
high-precision POSIX timestamp and not on the Python datetime class, because precision was an issue.
1.2.1 Initialization
In most cases there is no need to worry about timezones, but they are supported:
>>> UTCDateTime("2012-09-07T12:15:00+02:00")
UTCDateTime(2012, 9, 7, 10, 15)
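Besides ISO 8601 strings, a UTCDateTime can also be initialized from separate integers or from a POSIX timestamp,
and its attributes can be accessed directly; a short sketch:
>>> from obspy.core import UTCDateTime
>>> UTCDateTime(2012, 9, 7, 12, 15, 0)
UTCDateTime(2012, 9, 7, 12, 15)
>>> UTCDateTime(1347020100.0)
UTCDateTime(2012, 9, 7, 12, 15)
>>> UTCDateTime("2012-09-07T12:15:00").julday
251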
1.2.4 Exercises
• Calculate the number of hours passed since your birth. Optional: Include the correct time zone. The current
date and time can be obtained with
>>> UTCDateTime()
• Get a list of 10 UTCDateTime objects, starting yesterday at 10:00 with a spacing of 90 minutes.
• The first session starts at 09:00 and lasts for 3 hours and 15 minutes. Assuming we want to have the coffee break
1234 seconds and 5 microseconds before it ends. At what time is the coffee break?
• Assume you had your last cup of coffee yesterday at breakfast. How many minutes do you have to survive with
that cup of coffee?
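A sketch of how the first two exercises might be approached (the birth date is, of course, illustrative):
>>> from obspy.core import UTCDateTime
>>> hours_alive = (UTCDateTime() - UTCDateTime(2000, 1, 1)) / 3600.0
>>> now = UTCDateTime()
>>> yesterday_ten = UTCDateTime(now.year, now.month, now.day, 10) - 24 * 3600
>>> times = [yesterday_ten + i * 90 * 60 for i in range(10)]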
1.3 Reading Seismograms
Seismograms of various formats (e.g. SAC, MiniSEED, GSE2, SEISAN, Q, etc.) can be imported into a Stream
object using the read() function.
Streams are list-like objects which contain multiple Trace objects, i.e. gap-less continuous time series and related
header/meta information.
Each Trace object has an attribute called data pointing to a NumPy ndarray of the actual time series and the
attribute stats which contains all meta information in a dictionary-like Stats object. Both attributes starttime
and endtime of the Stats object are UTCDateTime objects.
The following example demonstrates how a single GSE2-formatted seismogram file is read into an ObsPy Stream
object. There exists only one Trace in the given seismogram:
>>> from obspy.core import read
>>> st = read('http://examples.obspy.org/RJOB_061005_072159.ehz.new')
>>> print st
1 Trace(s) in Stream:
.RJOB..Z | 2005-10-06T07:21:59.849998Z - 2005-10-06T07:24:59.844998Z | 200.0 Hz, 36000 samples
Seismogram meta data, i.e. data describing the actual waveform data, are accessed via the stats keyword on each
Trace (here we first select the one and only trace):
>>> tr = st[0]
>>> print tr.stats
network:
station: RJOB
location:
channel: Z
starttime: 2005-10-06T07:21:59.849998Z
endtime: 2005-10-06T07:24:59.844998Z
sampling_rate: 200.0
delta: 0.005
npts: 36000
calib: 0.0948999971151
_format: GSE2
gse2: AttribDict({'instype': ' ', 'datatype': 'CM6', 'hang': -1.0, 'auxid': 'RJOB',
>>> tr.stats.station
'RJOB'
>>> tr.stats.gse2.datatype
'CM6'
The actual waveform data may be retrieved via the data keyword on each Trace:
>>> tr.data
array([-38, 12, -4, ..., -14, -3, -9])
>>> tr.data[0:3]
array([-38, 12, -4])
>>> len(tr)
36000
Stream objects offer a plot() method for fast preview of the waveform (requires the obspy.imaging module):
>>> st.plot()
1.4 Waveform Plotting Tutorial
Read the files as shown at the Reading Seismograms page. We will use two different ObsPy Stream objects
throughout this tutorial. The first one, singlechannel, just contains one continuous Trace and the other one,
threechannels, contains three channels of a seismograph.
[Figure: preview plot of the trace .RJOB..Z from the previous example, 2005-10-06T07:21:59Z to 07:24:59Z]
Using the plot() method of the Stream objects will show the plot. The default size of the plots is 800x250 pixels.
Use the size attribute to adjust it to your needs.
>>> singlechannel.plot()
[Figure: waveform plot of DK.COP..BHZ, 2009-02-19T00:00:00Z to 23:59:59Z]
This example shows the options to adjust the color of the graph, the number of ticks shown, their format and rotation,
and how to set the start and end time of the plot. Please see the documentation of the plot() method for more details
on all parameters.
>>> dt = singlechannel[0].stats.starttime
>>> singlechannel.plot(color='red', number_of_ticks=7,
...                    tick_rotation=5, tick_format='%I:%M %p',
...                    starttime=dt + 60*60, endtime=dt + 60*60 + 120)
[Figure: customized waveform plot of DK.COP..BHZ, 2009-02-19T01:00:00Z to 01:02:00Z, red graph with rotated 12-hour tick labels]
Plots may be saved into the file system via the outfile parameter. The format is determined automatically from the
filename. Supported file formats depend on your matplotlib backend. Most backends support png, pdf, ps, eps and
svg.
>>> singlechannel.plot(outfile='singlechannel.png')
If the Stream object contains more than one Trace, each Trace will be plotted in a subplot. The start and end time
of each trace will be the same and the range on the y-axis will also be identical on each trace. Each additional subplot
will add 250 pixels to the height of the resulting plot. The size attribute is used in the following example to change
the overall size of the plot.
>>> threechannels.plot(size=(800, 600))
A day plot of a Trace object may be plotted by setting the type parameter to 'dayplot':
>>> singlechannel.plot(type='dayplot')
Event information can be included in the plot as well (experimental feature, syntax might change):
>>> from obspy import read
>>> st = read("/tmp/GR.BFO..LHZ.2012.108")
>>> st.filter("lowpass", freq=0.1, corners=2)
>>> st.plot(type="dayplot", events={"min_magnitude": 6.5})
[Figure: three-channel waveform plot of DK.COP..BHE, BHN and BHZ, 2009-02-19T00:00:00Z to 23:59:59Z]
[Figures: dayplot output, one line per time interval, UTC hours on the vertical axis and time in minutes on the horizontal axis]
A record section can be plotted from a Stream object by setting the parameter type to 'section':
>>> stream.plot(type='section')
To plot a record section, the ObsPy header trace.stats.distance (offset) must be defined in meters.
Alternatively, a geographical location trace.stats.coordinates.latitude &
trace.stats.coordinates.longitude must be defined if the section is plotted in great circle
distances (dist_degree=True) along with the parameter ev_coord. For further information please see plot().
Various options are available to change the appearance of the waveform plot. Please see the plot() method for all
possible options.
[Figure: record section plot, time in seconds on the vertical axis versus offset]
1.5 Retrieving Data from Data Centers
The following example uses the obspy.arclink module in order to retrieve waveform data as well as poles and
zeros from a remote server via the ArcLink protocol. The retrieved poles and zeros are then used to correct for the
instrument response and to simulate a 1 Hz instrument with damping 0.707.
Note: The default client needs to open port 18002 to the host webdc.eu via TCP/IP in order to download the requested
data. Please make sure that no firewall is blocking access to this server/port combination.
Note: The user keyword in the following example is used for identification with the ArcLink server as well as for
usage statistics within the data center, so please provide a meaningful user id such as your email address.
import numpy as np
import matplotlib.pyplot as plt
from obspy.core import UTCDateTime
from obspy.arclink import Client
from obspy.signal import cornFreq2Paz, seisSim

# Fetch waveform data and poles and zeros via ArcLink (the station and
# time window are assumptions for this sketch; use a meaningful user id)
client = Client(user='test@obspy.org')
t = UTCDateTime("2009-08-24 00:20:03")
st = client.getWaveform('BW', 'RJOB', '', 'EHZ', t, t + 30)
paz = client.getPAZ('BW', 'RJOB', '', 'EHZ', t)

# 1Hz instrument
one_hertz = cornFreq2Paz(1.0)
# Correct for frequency response of the instrument
res = seisSim(st[0].data.astype('float32'),
              st[0].stats.sampling_rate,
              paz,
              inst_sim=one_hertz)
# Correct for overall sensitivity
res = res / paz['sensitivity']
[Figure: raw data (counts) and the simulated trace labeled "1Hz CornerFrequency" over 30 s]
1.6 Filtering Seismograms
The following script shows how to filter a seismogram. The example uses a zero-phase-shift low-pass filter with a
corner frequency of 1 Hz using 2 corners. This is done in two runs, forward and backward, so we end up with 4 corners
de facto.
The available filters are:
• bandpass
• bandstop
• lowpass
• highpass
import numpy as np
import matplotlib.pyplot as plt
from obspy.core import read
# Read the seismogram (example data, URL as used above)
st = read("http://examples.obspy.org/RJOB_061005_072159.ehz.new")
# There is only one trace in the Stream object, let's work on that trace...
tr = st[0]
# Filter a copy of the trace with the zero-phase lowpass described above
tr_filt = tr.copy()
tr_filt.filter('lowpass', freq=1.0, corners=2, zerophase=True)
[Figure: raw data and lowpassed data of 2005-10-06T07:21:59.850000Z, time in seconds]
1.7 Downsampling Seismograms
The following script shows how to downsample a seismogram. Currently, a simple integer decimation is supported.
If not explicitly disabled, a low-pass filter is applied prior to decimation in order to prevent aliasing. For
comparison, the non-decimated but filtered data is plotted as well. Applied processing steps are documented in
trace.stats.processing of every single Trace. Note the shift that is introduced, because by default the applied
filters are not of zero-phase type. This can be avoided by manually applying a zero-phase filter and deactivating
automatic filtering during downsampling (no_filter=True).
import numpy as np
import matplotlib.pyplot as plt
from obspy.core import read
# Read the seismogram (example data, URL as used above)
st = read("http://examples.obspy.org/RJOB_061005_072159.ehz.new")
# There is only one trace in the Stream object, let's work on that trace...
tr = st[0]
# Decimate a copy of the trace by a factor of 4, i.e. from 200 Hz to 50 Hz
tr_new = tr.copy()
tr_new.decimate(factor=4, strict_length=False)
# For comparison also only filter the original data (same filter options as in
# automatically applied filtering during downsampling, corner frequency
# 0.4 * new sampling rate)
tr_filt = tr.copy()
tr_filt.filter('lowpass', freq=0.4 * tr.stats.sampling_rate / 4.0)
1.8 Merging Seismograms
The following example shows how to merge and plot three seismograms with overlaps; the longest one is taken to be
the right one. Please also refer to the documentation of the merge() method.
from obspy.core import read
import matplotlib.pyplot as plt
import numpy as np
[Figure: raw, lowpassed, and lowpassed/downsampled traces around 82.0 to 83.4 s, from the downsampling example]
# sort
st.sort(['starttime'])
# start time in plot equals 0
dt = st[0].stats.starttime.timestamp
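The reading and merging steps are not shown in the fragment above; a minimal sketch (the example file URLs are
assumptions):
from obspy.core import read
# read three seismogram files with overlapping time ranges into one Stream
st = read("http://examples.obspy.org/dis.G.SCZ.__.BHE")
st += read("http://examples.obspy.org/dis.G.SCZ.__.BHE.1")
st += read("http://examples.obspy.org/dis.G.SCZ.__.BHE.2")
# merge overlapping traces into one (see the merge() docs for overlap handling)
st.merge(method=1)
st.plot()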
1.9 Beamforming - FK Analysis
The following code shows how to do an FK analysis with ObsPy. The data are from the blasting of the AGFA
skyscraper in Munich. We execute array_processing() (called sonic() in earlier ObsPy versions) using the
following settings:
• The slowness grid is set to corner values of -3.0 to 3.0 s/km with a slowness step of sl_s = 0.03.
• The window length is 1.0 s, using a step fraction of win_frac = 0.05.
• The data is bandpass filtered, using corners at 1.0 and 8.0 Hz; prewhitening is disabled.
• semb_thres and vel_thres are set to infinitesimally small numbers and must not be changed.
• The timestamp will be written in 'mlabday', which can be read directly by our plotting routine.
• stime and etime have to be given in the UTCDateTime format.
The output will be stored in out.
The second half shows how to plot the output. We use the output out produced by array_processing(), which
is a numpy ndarray containing timestamp, relative power, absolute power, backazimuth and slowness. The colorbar
corresponds to relative power.
from obspy.core import read, UTCDateTime, AttribDict
from obspy.signal import cornFreq2Paz
from obspy.signal.array_analysis import array_processing
import matplotlib.pyplot as plt
# Load data
st = read("http://examples.obspy.org/agfa.mseed")
[Figure: raw waveforms of the four array stations used in the FK analysis]
st[1].stats.paz = AttribDict({
    'poles': [(-0.03736 - 0.03617j), (-0.03736 + 0.03617j)],
    'zeros': [0j, 0j],
    'sensitivity': 205479446.68601453,
    'gain': 1.0})
st[1].stats.coordinates = AttribDict({
    'latitude': 48.108192,
    'elevation': 0.450000,
    'longitude': 11.583120})
st[2].stats.paz = AttribDict({
    'poles': [(-0.03736 - 0.03617j), (-0.03736 + 0.03617j)],
    'zeros': [0j, 0j],
    'sensitivity': 250000000.0,
    'gain': 1.0})
st[2].stats.coordinates = AttribDict({
    'latitude': 48.108692,
    'elevation': 0.450000,
    'longitude': 11.583414})
st[3].stats.paz = AttribDict({
    'poles': [(-4.39823 + 4.48709j), (-4.39823 - 4.48709j)],
    'zeros': [0j, 0j],
    'sensitivity': 222222228.10910088,
    'gain': 1.0})
st[3].stats.coordinates = AttribDict({
    'latitude': 48.108456,
    'elevation': 0.450000,
    'longitude': 11.583049})
st[4].stats.paz = AttribDict({
    'poles': [(-4.39823 + 4.48709j), (-4.39823 - 4.48709j), (-2.105 + 0j)],
    'zeros': [0j, 0j, 0j],
    'sensitivity': 222222228.10910088,
    'gain': 1.0})
st[4].stats.coordinates = AttribDict({
    'latitude': 48.108730,
    'elevation': 0.450000,
    'longitude': 11.583157})
# Execute array_processing (the keyword arguments were truncated in the
# original listing; they are reconstructed from the identical second example
# below)
kwargs = dict(
    # slowness grid: X min, X max, Y min, Y max, Slow Step
    sll_x=-3.0, slm_x=3.0, sll_y=-3.0, slm_y=3.0, sl_s=0.03,
    # sliding window properties
    win_len=1.0, win_frac=0.05,
    # frequency properties
    frqlow=1.0, frqhigh=8.0, prewhiten=0,
    # restrict output
    semb_thres=-1e9, vel_thres=-1e9, timestamp='mlabday',
    stime=UTCDateTime("20080217110515"), etime=UTCDateTime("20080217110545")
)
out = array_processing(st, **kwargs)

# Plot
labels = ['rel.power', 'abs.power', 'baz', 'slow']
fig = plt.figure()
for i, lab in enumerate(labels):
    ax = fig.add_subplot(4, 1, i + 1)
    ax.scatter(out[:, 0], out[:, i + 1], c=out[:, 1], alpha=0.6,
               edgecolors='none')
    ax.set_ylabel(lab)
    ax.set_xlim(out[0, 0], out[-1, 0])
    ax.set_ylim(out[:, i + 1].min(), out[:, i + 1].max())
fig.autofmt_xdate()
fig.subplots_adjust(top=0.95, right=0.95, bottom=0.2, hspace=0)
plt.show()
Another representation would be a polar plot, which sums the relative power in gridded bins, each defined by
backazimuth and slowness of the analyzed signal part. The backazimuth is counted clockwise from north; the slowness
limits can be set by hand.
from obspy.core import read, UTCDateTime, AttribDict
from obspy.signal import cornFreq2Paz
from obspy.signal.array_analysis import array_processing
# Load data
st = read("http://examples.obspy.org/agfa.mseed")
st[1].stats.paz = AttribDict({
    'poles': [(-0.03736 - 0.03617j), (-0.03736 + 0.03617j)],
    'zeros': [0j, 0j],
    'sensitivity': 205479446.68601453,
    'gain': 1.0})
st[1].stats.coordinates = AttribDict({
    'latitude': 48.108192,
    'elevation': 0.450000,
    'longitude': 11.583120})
st[2].stats.paz = AttribDict({
    'poles': [(-0.03736 - 0.03617j), (-0.03736 + 0.03617j)],
    'zeros': [0j, 0j],
    'sensitivity': 250000000.0,
    'gain': 1.0})
st[2].stats.coordinates = AttribDict({
    'latitude': 48.108692,
    'elevation': 0.450000,
    'longitude': 11.583414})
st[3].stats.paz = AttribDict({
    'poles': [(-4.39823 + 4.48709j), (-4.39823 - 4.48709j)],
    'zeros': [0j, 0j],
    'sensitivity': 222222228.10910088,
    'gain': 1.0})
st[3].stats.coordinates = AttribDict({
    'latitude': 48.108456,
    'elevation': 0.450000,
    'longitude': 11.583049})
st[4].stats.paz = AttribDict({
    'poles': [(-4.39823 + 4.48709j), (-4.39823 - 4.48709j), (-2.105 + 0j)],
    'zeros': [0j, 0j, 0j],
    'sensitivity': 222222228.10910088,
    'gain': 1.0})
st[4].stats.coordinates = AttribDict({
    'latitude': 48.108730,
    'elevation': 0.450000,
    'longitude': 11.583157})
# Execute array_processing
kwargs = dict(
    # slowness grid: X min, X max, Y min, Y max, Slow Step
    sll_x=-3.0, slm_x=3.0, sll_y=-3.0, slm_y=3.0, sl_s=0.03,
    # sliding window properties
    win_len=1.0, win_frac=0.05,
    # frequency properties
    frqlow=1.0, frqhigh=8.0, prewhiten=0,
    # restrict output
    semb_thres=-1e9, vel_thres=-1e9, timestamp='mlabday',
    stime=UTCDateTime("20080217110515"), etime=UTCDateTime("20080217110545")
)
out = array_processing(st, **kwargs)

# Plot
from matplotlib.colorbar import ColorbarBase
from matplotlib.colors import Normalize
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import numpy as np

cmap = cm.hot_r

# make output human readable, adjust backazimuth to values between 0 and 360
t, rel_power, abs_power, baz, slow = out.T
baz[baz < 0.0] += 360

# sum the relative power in bins given by backazimuth and slowness
# (the bin counts are assumptions for this sketch)
abins = np.arange(37) * 360. / 36
sbins = np.linspace(0, 3, 31)
hist, baz_edges, sl_edges = np.histogram2d(baz, slow, bins=[abins, sbins],
                                           weights=rel_power)

# transform to radian
baz_edges = np.radians(baz_edges)
dh = abs(sl_edges[1] - sl_edges[0])
dw = abs(baz_edges[1] - baz_edges[0])

# the polar bar plot built from hist, dh and dw is not reproduced here
plt.show()
1.10 Seismogram Envelopes
The following script shows how to filter a seismogram and plot it together with its envelope.
This example uses a zero-phase-shift bandpass to filter the data with corner frequencies 1 and 3 Hz, using 2 corners
(two runs due to the zero-phase option, thus 4 corners overall). Then we calculate the envelope and plot it together
with the Trace. Data can be found here.
import numpy as np
import matplotlib.pyplot as plt
from obspy.core import read
[Figure: polar FK plot from the previous example, relative power binned by backazimuth (N/E/S/W) and slowness]
st = read("http://examples.obspy.org/RJOB_061005_072159.ehz.new")
data = st[0].data
npts = st[0].stats.npts
samprate = st[0].stats.sampling_rate
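The filtering, envelope computation and plotting that produce the figure below are missing from the listing; a
minimal sketch with the filter settings described above:
from obspy.signal.filter import envelope
# Bandpass filter the data (zero phase, corners at 1 and 3 Hz)
st_filt = st.copy()
st_filt.filter('bandpass', freqmin=1, freqmax=3, corners=2, zerophase=True)
# Envelope of filtered data
data_envelope = envelope(st_filt[0].data)
# Plot the filtered trace together with its envelope
t = np.arange(0, npts / samprate, 1 / samprate)
plt.plot(t, st_filt[0].data, 'k')
plt.plot(t, data_envelope, 'k:')
plt.xlabel('Time [s]')
plt.show()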
[Figure: filtered data with envelope, 2005-10-06T07:21:59.850000Z, 80 to 90 s]
1.11 Plotting Spectrograms
The following lines of code demonstrate how to make a spectrogram plot of an ObsPy Stream object.
Lots of options can be customized, see spectrogram() for more details. For example, the colormap of the plot can
easily be adjusted by importing a predefined colormap from matplotlib.cm; nice overviews of available matplotlib
colormaps are given at:
• http://www.astro.lsa.umich.edu/~msshin/science/code/matplotlib_cm/
• http://www.scipy.org/Cookbook/Matplotlib/Show_colormaps
from obspy.core import read
st = read("http://examples.obspy.org/RJOB_061005_072159.ehz.new")
st.spectrogram(log=True, title='BW.RJOB ' + str(st[0].stats.starttime))
[Figure: spectrogram of BW.RJOB 2005-10-06T07:21:59.850000Z, frequency in Hz on a log scale versus time]
1.12 Trigger/Picker Tutorial
This is a small tutorial that started as a practical for the UNESCO short course on triggering. Test data used in this
tutorial can be downloaded here: trigger_data.zip.
The triggers are implemented as described in [Withers1998]. Information on finding the right trigger parameters for
STA/LTA type triggers can be found in [Trnkoczy2012].
See Also:
Please note the convenience methods Stream.trigger() and Trace.trigger() of ObsPy for triggering.
The data files are read into an ObsPy Stream object using the read() function.
>>> from obspy.core import read
>>> st = read("http://examples.obspy.org/ev0_6.a01.gse2")
>>> st = st.select(component="Z")
>>> tr = st[0]
The data format is automatically detected. Important in this tutorial are the Trace attributes:
tr.data contains the data as numpy.ndarray
tr.stats contains a dict-like class of header entries
tr.stats.sampling_rate the sampling rate
tr.stats.npts sample count of data
As an example, the header of the data file is printed and the data are plotted like this:
>>> print tr.stats
network:
station: EV0_6
location:
channel: EHZ
starttime: 1970-01-01T01:00:00.000000Z
endtime: 1970-01-01T01:00:59.995000Z
sampling_rate: 200.0
delta: 0.005
npts: 12000
calib: 1.0
_format: GSE2
gse2: AttribDict({'instype': ' ', 'datatype': 'CM6', 'hang': 0.0, 'auxid': ' ', '
Using the plot() method of the Trace objects will show the plot.
>>> tr.plot(type="relative")
[Figure: plot of trace .EV0_6..EHZ, 0 to 60 s relative time]
After loading the data, we are able to pass the waveform data to the following trigger routines defined in
obspy.signal.trigger:
recSTALTA(a, nsta, nlta): Recursive STA/LTA.
carlSTATrig(a, nsta, nlta, ratio, quiet): Computes the carlSTATrig characteristic function.
classicSTALTA(a, nsta, nlta): Computes the standard STA/LTA from a given input array a; the lengths of the STA and LTA are given by nsta and nlta in samples.
delayedSTALTA(a, nsta, nlta): Delayed STA/LTA.
zDetect(a, nsta): Z-detector.
pkBaer(reltrc, samp_int, tdownmax, tupevent, ...): Wrapper for the P-picker routine by M. Baer, Schweizer Erdbebendienst.
arPick(a, b, c, samp_rate, f1, f2, lta_p, ...): Returns corresponding picks of the AR picker.
obspy.signal.trigger.recSTALTA
obspy.signal.trigger.carlSTATrig
obspy.signal.trigger.classicSTALTA
obspy.signal.trigger.delayedSTALTA
obspy.signal.trigger.zDetect
zDetect(a, nsta)
Z-detector.
Parameters nsta – Window length in samples.
See Also:
[Withers1998], p. 99
obspy.signal.trigger.pkBaer
See Also:
[Baer1987]
obspy.signal.trigger.arPick
arPick(a, b, c, samp_rate, f1, f2, lta_p, sta_p, lta_s, sta_s, m_p, m_s, l_p, l_s, s_pick=True)
Return corresponding picks of the AR picker
Parameters
• a – Z signal of numpy.ndarray float32 point data
• b – N signal of numpy.ndarray float32 point data
• c – E signal of numpy.ndarray float32 point data
• samp_rate – number of samples per second
• f1 – frequency of lower bandpass window
• f2 – frequency of upper bandpass window
• lta_p – length of LTA for P arrival in seconds
• sta_p – length of STA for P arrival in seconds
• lta_s – length of LTA for S arrival in seconds
• sta_s – length of STA for S arrival in seconds
• m_p – number of AR coefficients for P arrival
• m_s – number of AR coefficients for S arrival
• l_p – length of variance window for P arrival in seconds
• l_s – length of variance window for S arrival in seconds
• s_pick – if True, also pick the S phase, else only P
Returns (ptime, stime) P arrival and S arrival
Help for each function is available HTML formatted or in the usual Python manner:
For all the examples, the commands to read in the data and to load the modules are the following:
>>> from obspy.core import read
>>> from obspy.signal.trigger import plotTrigger
>>> trace = read("http://examples.obspy.org/ev0_6.a01.gse2")[0]
>>> df = trace.stats.sampling_rate
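The code of the individual trigger examples is not reproduced here; running one of them might look like the following
sketch (window lengths and thresholds are illustrative):
>>> from obspy.signal.trigger import recSTALTA
>>> cft = recSTALTA(trace.data, int(5 * df), int(10 * df))
>>> plotTrigger(trace, cft, 1.2, 0.5)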
[Figures: plotTrigger output for the different trigger routines (among them Z-Detect and Carl-Sta-Trig); each figure shows the .EV0_6..EHZ waveform with trigger on/off marks on top and the corresponding characteristic function below, over 60 s after 1970-01-01T01:00:00.]
In this example we perform a coincidence trigger on a local scale network of 4 stations. For the single station triggers
a recursive STA/LTA is used. The waveform data span about four minutes and include four local events. Two are
easily recognizable (Ml 1-2), the other two can only be detected with well adjusted trigger settings (Ml <= 0).
First we assemble a Stream object with all waveform data, the data used in the example is available from our web
server:
>>> from obspy.core import Stream, read
>>> st = Stream()
>>> files = ["BW.UH1..SHZ.D.2010.147.cut.slist.gz",
... "BW.UH2..SHZ.D.2010.147.cut.slist.gz",
... "BW.UH3..SHZ.D.2010.147.cut.slist.gz",
... "BW.UH4..SHZ.D.2010.147.cut.slist.gz"]
>>> for filename in files:
... st += read("http://examples.obspy.org/" + filename)
After applying a bandpass filter we run the coincidence triggering on all data. In the example a recursive STA/LTA is
used. The trigger parameters are set to 0.5 and 10 second time windows, respectively. The on-threshold is set to 3.5,
the off-threshold to 1. In this example every station gets a weight of 1 and the coincidence sum threshold is set to 3.
For more complex network setups the weighting for every station/channel can be customized. We want to keep our
original data so we work with a copy of the original stream:
>>> st.filter('bandpass', freqmin=10, freqmax=20)  # optional prefiltering
>>> from obspy.signal import coincidenceTrigger
>>> st2 = st.copy()
>>> trig = coincidenceTrigger("recstalta", 3.5, 1, st2, 3, sta=0.5, lta=10)
With these settings the coincidence trigger reports three events. For each (possible) event the start time and duration
are provided. Furthermore, a list of station names and trace IDs is provided, ordered by the time the stations have
triggered, which can give a first rough idea of the possible event location. We can request additional information by
specifying details=True:
>>> st2 = st.copy()
>>> trig = coincidenceTrigger("recstalta", 3.5, 1, st2, 3, sta=0.5, lta=10,
... details=True)
For clarity, we only display information on the first item in the results here:
>>> from pprint import pprint
>>> pprint(trig[0])
{'cft_peak_wmean': 19.561900329259956,
 'cft_peaks': [19.535644192544272,
               19.872432918501264,
               19.622171410201297,
               19.217352795792998],
 'cft_std_wmean': 5.4565629691954713,
 'cft_stds': [5.292458320417178,
              5.6565387957966404,
              5.7582248973698507,
              5.1190298631982163],
 'coincidence_sum': 4.0,
 'duration': 4.5299999713897705,
 'stations': ['UH3', 'UH2', 'UH1', 'UH4'],
 'time': UTCDateTime(2010, 5, 27, 16, 24, 33, 190000),
 'trace_ids': ['BW.UH3..SHZ', 'BW.UH2..SHZ', 'BW.UH1..SHZ', 'BW.UH4..SHZ']}
Here, some additional information on the peak values and standard deviations of the characteristic functions of the
single station triggers is provided. Also, for both a weighted mean is calculated. These values can help to distinguish
certain from questionable network triggers.
For more information on all possible options see the documentation page for coincidenceTrigger().
This example is an extension of the common network coincidence trigger. Waveforms with already known event(s)
can be provided to check waveform similarity of single-station triggers. If the corresponding similarity threshold is
exceeded the event trigger is included in the result list even if the coincidence sum does not exceed the specified
minimum coincidence sum. Using this approach, events can be detected that have good recordings on one station
with very similar waveforms but for some reason are not detected on enough other stations (e.g. temporary station
outages or local high noise levels etc.). An arbitrary number of template waveforms can be provided for any station.
Computation time might get significantly higher due to the necessary cross correlations. In the example we use two
three-component event templates on top of a common network trigger on vertical components only.
>>> from obspy.core import Stream, read
>>> st = Stream()
>>> files = ["BW.UH1..SHZ.D.2010.147.cut.slist.gz",
... "BW.UH2..SHZ.D.2010.147.cut.slist.gz",
... "BW.UH3..SHZ.D.2010.147.cut.slist.gz",
... "BW.UH3..SHN.D.2010.147.cut.slist.gz",
... "BW.UH3..SHE.D.2010.147.cut.slist.gz",
... "BW.UH4..SHZ.D.2010.147.cut.slist.gz"]
>>> for filename in files:
... st += read("http://examples.obspy.org/" + filename)
>>> st.filter('bandpass', freqmin=10, freqmax=20)  # optional prefiltering
Here we set up a dictionary with template events for one single station. The specified times are exact P wave onsets,
the event duration (including S wave) is about 2.5 seconds. On station UH3 we use two template events with three-
component data, on station UH1 we use one template event with only vertical component data.
>>> from obspy.core import UTCDateTime
>>> times = ["2010-05-27T16:24:33.095000", "2010-05-27T16:27:30.370000"]
>>> event_templates = {"UH3": []}
>>> for t in times:
... t = UTCDateTime(t)
... st_ = st.select(station="UH3").slice(t, t + 2.5)
... event_templates["UH3"].append(st_)
>>> t = UTCDateTime("2010-05-27T16:27:30.574999")
>>> st_ = st.select(station="UH1").slice(t, t + 2.5)
>>> event_templates["UH1"] = [st_]
The triggering step, this time providing the similarity thresholds and the event template waveforms. Note that the
coincidence sum is set to 4 and we manually specify to only use vertical components with equal station coincidence
values of 1.
>>> from obspy.signal import coincidenceTrigger
>>> st2 = st.copy()
>>> trace_ids = {"BW.UH1..SHZ": 1,
... "BW.UH2..SHZ": 1,
... "BW.UH3..SHZ": 1,
... "BW.UH4..SHZ": 1}
>>> similarity_thresholds = {"UH1": 0.8, "UH3": 0.7}
>>> trig = coincidenceTrigger("classicstalta", 5, 1, st2, 4, sta=0.5,
... lta=10, trace_ids=trace_ids,
... event_templates=event_templates,
... similarity_threshold=similarity_thresholds)
The results now include two event triggers that do not reach the specified minimum coincidence threshold but that
have a similarity value that exceeds the specified similarity threshold when compared to at least one of the provided
event template waveforms. Note the values of 1.0 when checking the event triggers from which we extracted the event
templates for this example.
>>> from pprint import pprint
>>> pprint(trig)
[{'coincidence_sum': 4.0,
  'duration': 4.1100001335144043,
  'similarity': {'UH1': 0.9414944738498271, 'UH3': 1.0},
  'stations': ['UH3', 'UH2', 'UH1', 'UH4'],
  'time': UTCDateTime(2010, 5, 27, 16, 24, 33, 210000),
  'trace_ids': ['BW.UH3..SHZ', 'BW.UH2..SHZ', 'BW.UH1..SHZ', 'BW.UH4..SHZ']},
 {'coincidence_sum': 3.0,
  'duration': 1.9900000095367432,
  'similarity': {'UH1': 0.65228204570577764, 'UH3': 0.72679293429214198},
  'stations': ['UH3', 'UH1', 'UH2'],
  'time': UTCDateTime(2010, 5, 27, 16, 25, 26, 710000),
  'trace_ids': ['BW.UH3..SHZ', 'BW.UH1..SHZ', 'BW.UH2..SHZ']},
 {'coincidence_sum': 3.0,
  'duration': 1.9200000762939453,
  'similarity': {'UH1': 0.89404458774338103, 'UH3': 0.74581409371425222},
  'stations': ['UH2', 'UH1', 'UH3'],
  'time': UTCDateTime(2010, 5, 27, 16, 27, 2, 260000),
  'trace_ids': ['BW.UH2..SHZ', 'BW.UH1..SHZ', 'BW.UH3..SHZ']},
 {'coincidence_sum': 4.0,
  'duration': 4.0299999713897705,
  'similarity': {'UH1': 1.0, 'UH3': 1.0},
  'stations': ['UH3', 'UH2', 'UH1', 'UH4'],
  'time': UTCDateTime(2010, 5, 27, 16, 27, 30, 510000),
  'trace_ids': ['BW.UH3..SHZ', 'BW.UH2..SHZ', 'BW.UH1..SHZ', 'BW.UH4..SHZ']}]
For more information on all possible options see the documentation page for coincidenceTrigger().
Baer Picker
This yields the output 34.47 EPU3, which means that a P pick was set at 34.47s with Phase information EPU3.
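The corresponding code might look like the following sketch (the parameter values are illustrative; see the pkBaer
documentation above):
>>> from obspy.signal.trigger import pkBaer
>>> df = trace.stats.sampling_rate
>>> p_pick, phase_info = pkBaer(trace.data, df, 20, 60, 7.0, 12.0, 100, 100)
>>> print p_pick / df, phase_info
34.47 EPU3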
AR Picker
This gives the output 30.6350002289 and 31.2800006866, meaning that a P pick at 30.64s and an S pick at 31.28s
were identified.
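A sketch of the corresponding call, assuming three-component traces for Z, N and E are at hand (the parameter values
are illustrative; see the arPick documentation above):
>>> from obspy.signal.trigger import arPick
>>> p_pick, s_pick = arPick(tr_z.data, tr_n.data, tr_e.data, df,
...                         1.0, 20.0, 1.0, 0.1, 4.0, 1.0, 2, 8, 0.1, 0.2)
>>> print p_pick, s_pick
30.6350002289 31.2800006866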
A more complicated example, where the data are retrieved via ArcLink and results are plotted step by step, is shown
here:
from obspy.core import UTCDateTime
from obspy.arclink import Client
from obspy.signal.trigger import recSTALTA, triggerOnset
import matplotlib.pyplot as plt

# Retrieve waveforms via ArcLink (station and time window are assumptions
# for this sketch)
client = Client(user='test@obspy.org')
t = UTCDateTime("2009-08-24 00:19:45")
st = client.getWaveform('BW', 'RTSH', '', 'EHZ', t, t + 50)

# For convenience
tr = st[0]  # only one trace in mseed volume
df = tr.stats.sampling_rate
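# compute the characteristic function and the trigger on/off times
# (a sketch; window lengths and thresholds are illustrative)
cft = recSTALTA(tr.data, int(2.5 * df), int(10. * df))
on_off = triggerOnset(cft, 3.5, 0.5)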
[Figure: waveform and recursive STA/LTA characteristic function with trigger thresholds]
1.13 Poles and Zeros, Frequency Response
The following lines show how to calculate and visualize the frequency response of a LE-3D/1s seismometer with a
sampling interval of 0.005 s and 16384 points of FFT. Two things have to be taken into account for the phase (actually
for the imaginary part of the response):
• the FFT that is used is defined as exp(-i*phi), but this minus sign is missing for the visualization, so we have to
add it again
• we want the phase to go from 0 to 2*pi, instead of the output from atan2 that goes from -pi to pi
import numpy as np
import matplotlib.pyplot as plt
from obspy.signal import pazToFreqResp

# poles and zeros of the LE-3D/1s (as also used in the Dataless SEED cloning
# example below; the scale factor is an assumption for this sketch)
poles = [-4.444 + 4.444j, -4.444 - 4.444j, -1.083 + 0.0j]
zeros = [0.0 + 0.0j, 0.0 + 0.0j, 0.0 + 0.0j]
scale_fac = 0.4

h, f = pazToFreqResp(poles, zeros, scale_fac, 0.005, 16384, freq=True)

plt.figure()
plt.subplot(121)
plt.loglog(f, abs(h))
plt.xlabel('Frequency [Hz]')
plt.ylabel('Amplitude')
plt.subplot(122)
# take negative of imaginary part
phase = np.unwrap(np.arctan2(-h.imag, h.real))
plt.semilogx(f, phase)
plt.xlabel('Frequency [Hz]')
plt.ylabel('Phase [radian]')
# title, centered above both subplots
plt.suptitle('Frequency Response of LE-3D/1s Seismometer')
# make more room in between subplots for the ylabel of right plot
plt.subplots_adjust(wspace=0.3)
plt.show()
1.14 Seismometer Correction/Simulation
The following script shows how to simulate a 1 Hz seismometer from an STS-2 seismometer with the given poles
and zeros. Poles, zeros, gain (A0 normalization factor) and sensitivity (overall sensitivity) are specified as keys of a
dictionary.
from obspy.core import read
from obspy.signal import cornFreq2Paz
paz_sts2 = {
    'poles': [-0.037004 + 0.037016j, -0.037004 - 0.037016j, -251.33 + 0j,
              -131.04 - 467.29j, -131.04 + 467.29j],
    'zeros': [0j, 0j],
    'gain': 60077000.0,
    'sensitivity': 2516778400.0}
paz_1hz = cornFreq2Paz(1.0, damp=0.707)  # 1Hz instrument
paz_1hz['sensitivity'] = 1.0
st = read()
# make a copy to keep our original data
st_orig = st.copy()
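The actual correction step is not shown in the listing; it might look like this sketch, removing the STS-2 response
and simulating the 1 Hz instrument defined above:
st.simulate(paz_remove=paz_sts2, paz_simulate=paz_1hz)
st.plot()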
[Figure: simulated 1 Hz instrument traces BW.RJOB EHZ, EHN and EHE, 2009-08-24T00:20:03Z to 00:20:32Z]
For more customized plotting we could also work with matplotlib manually from here:
import numpy as np
import matplotlib.pyplot as plt
tr = st[0]
tr_orig = st_orig[0]
t = np.arange(tr.stats.npts) / tr.stats.sampling_rate
plt.subplot(211)
plt.plot(t, tr_orig.data, 'k')
plt.ylabel('STS-2 [counts]')
plt.subplot(212)
plt.plot(t, tr.data, 'k')
plt.ylabel('1Hz Instrument [m/s]')
plt.xlabel('Time [s]')
plt.show()
[Figure: STS-2 raw data in counts and simulated 1 Hz instrument in m/s over 30 s]
It is further possible to use evalresp to evaluate the instrument response information from a RESP file.
from obspy.core import UTCDateTime
from obspy.fdsn import Client as FDSN_Client
from obspy.iris import Client as OldIris_Client
# time window of the raw data (values are assumptions for this sketch)
t1 = UTCDateTime("2010-09-03T16:30:00.000")
t2 = UTCDateTime("2010-09-03T17:00:00.000")
# Fetch waveform from IRIS FDSN web service into an ObsPy stream object
fdsn_client = FDSN_Client("IRIS")
st = fdsn_client.get_waveforms('NZ', 'BFZ', '10', 'HHZ', t1, t2)
# this can be the date of your raw data or any date for which the
# SEED RESP-file is valid
date = t1
# bandpass taper and RESP file for evalresp (the filename is an assumption)
pre_filt = (0.005, 0.006, 30.0, 35.0)
seedresp = {'filename': 'RESP.NZ.BFZ.10.HHZ', 'date': date, 'units': 'DIS'}
# Remove instrument response using the information from the given RESP file
st.simulate(paz_remove=None, pre_filt=pre_filt, seedresp=seedresp)
plt.subplot(211)
plt.plot(time, tr_orig.data, 'k')
plt.ylabel('STS-2 [counts]')
plt.subplot(212)
plt.plot(time, tr.data, 'k')
plt.ylabel('Displacement [m]')
plt.xlabel('Time [s]')
plt.show()
A Parser object created using a Dataless SEED file can also be used; the respective RESP response data is then
extracted internally for each trace. When using the simulate() convenience methods of Stream/Trace, the date
information is taken from each trace's start time automatically.
[Figure: STS-2 raw data in counts and instrument-corrected displacement in m over 1800 s]
st = read("http://examples.obspy.org/BW.BGLD..EH.D.2010.037")
parser = Parser("http://examples.obspy.org/dataless.seed.BW_BGLD")
st.simulate(seedresp={'filename': parser, 'units': "DIS"})
1.15 Clone an Existing Dataless SEED File
The following code example shows how to clone an existing Dataless SEED file (dataless.seed.BW_RNON) and
use it as a template to build up a Dataless SEED file for a new station.
First of all, we have to make the necessary imports and read the existing Dataless SEED volume (stored on our examples
webserver):
>>> from obspy.core import UTCDateTime
>>> from obspy.xseed import Parser
>>>
>>> p = Parser("http://examples.obspy.org/dataless.seed.BW_RNON")
>>> blk = p.blockettes
Now we can adapt the information only appearing once in the Dataless SEED volume at the start of the file, in this case
Blockette 50 and the abbreviations in Blockette 33:
>>> blk[50][0].network_code = 'BW'
>>> blk[50][0].station_call_letters = 'RMOA'
>>> blk[50][0].site_name = "Moar Alm, Bavaria, BW-Net"
>>> blk[50][0].latitude = 47.761658
>>> blk[50][0].longitude = 12.864466
>>> blk[50][0].elevation = 815.0
>>> blk[50][0].start_effective_date = UTCDateTime("2006-07-18T00:00:00.000000Z")
>>> blk[50][0].end_effective_date = ""
>>> blk[33][1].abbreviation_description = "Lennartz LE-3D/1 seismometer"
After that we have to change the information for all of the three channels involved:
>>> mult = len(blk[58])/3
>>> for i, cha in enumerate([’Z’, ’N’, ’E’]):
... blk[52][i].channel_identifier = 'EH%s' % cha
... blk[52][i].location_identifier = ''
... blk[52][i].latitude = blk[50][0].latitude
... blk[52][i].longitude = blk[50][0].longitude
... blk[52][i].elevation = blk[50][0].elevation
... blk[52][i].start_date = blk[50][0].start_effective_date
... blk[52][i].end_date = blk[50][0].end_effective_date
... blk[53][i].number_of_complex_poles = 3
... blk[53][i].real_pole = [-4.444, -4.444, -1.083]
... blk[53][i].imaginary_pole = [+4.444, -4.444, +0.0]
... blk[53][i].real_pole_error = [0, 0, 0]
... blk[53][i].imaginary_pole_error = [0, 0, 0]
... blk[53][i].number_of_complex_zeros = 3
... blk[53][i].real_zero = [0.0, 0.0, 0.0]
... blk[53][i].imaginary_zero = [0.0, 0.0, 0.0]
... blk[53][i].real_zero_error = [0, 0, 0]
... blk[53][i].imaginary_zero_error = [0, 0, 0]
Note: FIR coefficients are not set in this example. In case you require correct FIR coefficients, either clone from an
existing dataless file with the same seismometer type or set the corresponding blockettes with the correct values.
At the end we can write the adapted DatalessSEED volume to a new file:
>>> p.writeSEED("dataless.seed.BW_RMOA")
1.16 Export Seismograms to MATLAB
The following example shows how to read in a waveform file with Python and save each Trace in the resulting
Stream object to one MATLAB .MAT file. The data can then be loaded from within MATLAB with the load
function.
from obspy.core import read
from scipy.io import savemat
st = read("http://examples.obspy.org/BW.BGLD..EH.D.2010.037")
for i, tr in enumerate(st):
mdict = dict([[j, str(k)] for j, k in tr.stats.iteritems()])
mdict[’data’] = tr.data
savemat("data-%d.mat" % i, mdict)
1.17 Export Seismograms to ASCII
You may directly export waveform data to any ASCII format available in ObsPy using the write() method on the
generated Stream object.
>>> from obspy.core import read
>>> stream = read('http://examples.obspy.org/RJOB20090824.ehz')
>>> stream.write('outfile.ascii', format='SLIST')
• TSPAIR, an ASCII format where data is written in time-sample pairs (see also the TSPAIR format
description):
TIMESERIES BW_RJOB__EHZ_D, 6001 samples, 200 sps, 2009-08-24T00:20:03.000000, TSPAIR, INTEGER,
2009-08-24T00:20:03.000000 288
2009-08-24T00:20:03.005000 300
2009-08-24T00:20:03.010000 292
2009-08-24T00:20:03.015000 285
2009-08-24T00:20:03.020000 265
2009-08-24T00:20:03.025000 287
...
In the following, a small Python script is shown which converts each Trace of a seismogram file to an ASCII file
with a custom header. Waveform data will be multiplied by a given calibration factor and written using NumPy's
savetxt() function.
"""
USAGE: export_seismograms_to_ascii.py in_file out_file calibration
"""
from obspy.core import read
import numpy as np
import sys
try:
in_file = sys.argv[1]
out_file = sys.argv[2]
calibration = float(sys.argv[3])
except:
print __doc__
raise
st = read(in_file)
for i, tr in enumerate(st):
f = open("%s_%d" % (out_file, i), "w")
f.write("# STATION %s\n" % (tr.stats.station))
f.write("# CHANNEL %s\n" % (tr.stats.channel))
f.write("# START_TIME %s\n" % (str(tr.stats.starttime)))
f.write("# SAMP_FREQ %f\n" % (tr.stats.sampling_rate))
f.write("# NDAT %d\n" % (tr.stats.npts))
np.savetxt(f, tr.data * calibration, fmt="%f")
f.close()
1.18 Anything to MiniSEED
The following lines show how you can convert anything to MiniSEED format. In the example, a few lines of a weather
station output are written to a MiniSEED file. The correct meta information starttime, sampling_rate,
station name and so forth are also encoded (note: only meta information allowed by the MiniSEED standard can be
stored). Converting arbitrary ASCII to MiniSEED is extremely helpful if you want to send log messages, output of
meteorological stations or anything else via the SeedLink protocol.
import numpy as np
from obspy.core import read, Trace, Stream, UTCDateTime
weather = """
00.0000 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000000
00.0002 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000001
00.0005 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000002
00.0008 0.0 ??? 4.7 97.7 1015.4 0.0 010308 000003
00.0011 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000004
00.0013 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000005
00.0016 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000006
00.0019 0.0 ??? 4.7 97.7 1015.0 0.0 010308 000007
"""
1.19 Beachball Plot
The following lines show how to create a graphical representation of a focal mechanism.
from obspy.imaging.beachball import Beachball
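A minimal call might look like this sketch, with the focal mechanism given as strike, dip and rake (the values are
illustrative):
mt = [150, 87, 1]
Beachball(mt, size=200, linewidth=2, facecolor='b')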
1.20 Basemap Plot with Beachballs
The following example shows how to plot beachballs into a basemap plot together with some stations. The example
requires the basemap package (download site) to be installed. The SRTM file used can be downloaded here.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from obspy.imaging.beachball import Beach
import gzip
# create grids and compute map projection coordinates for lon/lat grid
x, y = m(*np.meshgrid(lons, lats))
for i in range(len(focmecs)):
b = Beach(focmecs[i], xy=(x[i], y[i]), width=1000, linewidth=1)
b.set_zorder(10)
ax.add_collection(b)
plt.show()
The first lines of our SRTM data file (from CGIAR) look like this:
ncols 400
nrows 200
xllcorner 12°40’E
yllcorner 47°40’N
xurcorner 13°00’E
yurcorner 47°50’N
cellsize 0.00083333333333333
NODATA_value -9999
682 681 685 690 691 689 678 670 675 680 681 679 675 671 674 680 679 679 675 671 668 664 659 660 656 6
[Figure: Basemap plot with beachballs and the stations RMOA, RTSH, RNON and RJOB, around 47.7°N, 12.8°E]
Some notes:
• The Python package GDAL allows you to directly read a GeoTIFF into a NumPy ndarray:
>>> geo = gdal.Open("file.geotiff")
>>> x = geo.ReadAsArray()
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
import numpy as np
from obspy.imaging.beachball import Beach

# set up a world map centered on the event (the projection is an assumption)
m = Basemap(projection='cyl', lon_0=142.36929, lat_0=38.3215, resolution='c')
m.drawcoastlines()
m.fillcontinents()
m.drawparallels(np.arange(-90., 120., 30.))
m.drawmeridians(np.arange(0., 420., 60.))
m.drawmapboundary()
x, y = m(142.36929, 38.3215)
focmecs = [0.136, -0.591, 0.455, -0.396, 0.046, -0.615]
ax = plt.gca()
b = Beach(focmecs, xy=(x, y), width=10, linewidth=1, alpha=0.85)
b.set_zorder(10)
ax.add_collection(b)
plt.show()
1.21 Interfacing R from Python
The rpy2 package makes it possible to interface R from Python. The following example shows how to convert data
(numpy.ndarray) to an R matrix and execute the R command summary on it.
>>> from obspy.core import read
>>> import rpy2.robjects as RO
>>> import rpy2.robjects.numpy2ri
>>> r = RO.r
>>> st = read("test/BW.BGLD..EHE.D.2008.001")
>>> M = RO.RMatrix(st[0].data)
>>> print r.summary(M)
Min. 1st Qu. Median Mean 3rd Qu. Max.
-1056.0 -409.0 -393.0 -393.7 -378.0 233.0
1.22 Coordinate Conversions
Coordinate conversions can be done conveniently using pyproj. After looking up the EPSG codes of the source and
target coordinate systems, the conversion can be done in just a few lines of code. The following example converts the
station coordinates of two German stations to the regionally used Gauß-Krüger system:
>>> import pyproj
>>> lat = [49.6919, 48.1629]
>>> lon = [11.2217, 11.2752]
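The conversion step itself is missing in the listing; a sketch, assuming WGS84 (EPSG:4326) as source and
Gauß-Krüger zone 4 (EPSG:31468) as target system:
>>> proj_wgs84 = pyproj.Proj(init="epsg:4326")
>>> proj_gk4 = pyproj.Proj(init="epsg:31468")
>>> x, y = pyproj.transform(proj_wgs84, proj_gk4, lon, lat)
>>> print x, y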
1.23 Hierarchical Clustering
An implementation of hierarchical clustering is provided in the hcluster package. Among other things, it makes it
possible to build clusters from similarity matrices and to make dendrogram plots. The following example shows how
to do this for an already computed similarity matrix. The similarity data are computed from events in an area with
induced seismicity (using the cross-correlation routines in obspy.signal) and can be fetched from our examples
webserver:
First, we import the necessary modules and load the data stored on our webserver:
>>> import pickle, urllib
>>> import matplotlib.pyplot as plt
>>> import hcluster
>>>
>>> url = "http://examples.obspy.org/dissimilarities.pkl"
>>> dissimilarity = pickle.load(urllib.urlopen(url))
Now, we can start building up the plots. First, we plot the dissimilarity matrix:
>>> plt.subplot(121)
>>> plt.imshow(1 - dissimilarity, interpolation="nearest")
After that, we use hcluster to build up and plot the dendrogram into the right-hand subplot:
>>> dissimilarity = hcluster.squareform(dissimilarity)
>>> threshold = 0.3
>>> linkage = hcluster.linkage(dissimilarity, method="single")
>>> clusters = hcluster.fcluster(linkage, 0.3, criterion="distance")
>>>
>>> plt.subplot(122)
>>> hcluster.dendrogram(linkage, color_threshold=0.3)
>>> plt.xlabel("Event number")
>>> plt.ylabel("Dissimilarity")
>>> plt.show()
1.24 Visualizing Probabilistic Power Spectral Densities
The following code example shows how to use the PPSD class defined in obspy.signal. The routine is useful
for interpretation of e.g. noise measurements for site quality control checks. For more information on the topic see
[McNamara2004].
>>> from obspy.core import read
>>> from obspy.xseed import Parser
>>> from obspy.signal import PPSD
Read data and select a trace with the desired station/channel combination:
[Figure: dissimilarity matrix (left) and dendrogram of event number versus dissimilarity (right) from the hierarchical clustering example]
>>> st = read("http://examples.obspy.org/BW.KW1..EHZ.D.2011.037")
>>> tr = st.select(id="BW.KW1..EHZ")[0]
Get poles and zeros information, e.g. from a dataless SEED file. Then initialize a new PPSD instance. The ppsd object
will then make sure that only appropriate data go into the probabilistic psd statistics.
>>> parser = Parser("http://examples.obspy.org/dataless.seed.BW_KW1")
>>> paz = parser.getPAZ(tr.id)
>>> ppsd = PPSD(tr.stats, paz)
Now we can add data (either trace or stream objects) to the ppsd estimate. This step may take a while. The return
value True indicates that the data was successfully added to the ppsd estimate.
>>> ppsd.add(st)
True
We can check what time ranges are represented in the ppsd estimate. ppsd.times contains a sorted list of start
times of the one hour long slices that the psds are computed from (here only the first two are printed).
>>> print ppsd.times[:2]
[UTCDateTime(2011, 2, 6, 0, 0, 0, 935000), UTCDateTime(2011, 2, 6, 0, 30, 0, 935000)]
>>> print "number of psd segments:", len(ppsd.times)
number of psd segments: 47
Adding the same stream again will do nothing (return value False), the ppsd object makes sure that no overlapping
data segments go into the ppsd estimate.
>>> ppsd.add(st)
False
>>> print "number of psd segments:", len(ppsd.times)
number of psd segments: 47
Below the actual PPSD (for a detailed discussion see [McNamara2004]) is a visualization of the data basis for the
PPSD (can also be switched off during plotting). The top row shows data fed into the PPSD, green patches represent
available data, red patches represent gaps in streams that were added to the PPSD. The bottom row in blue shows the
single psd measurements that go into the histogram. The default processing method fills gaps with zeros, these data
segments then show up as single outlying psd lines.
Note: Providing metadata from e.g. a Dataless SEED volume is safer than specifying static poles and zeros informa-
tion (see PPSD).
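The resulting plot, including the visualization of the data basis described above, is created with the plot() method:
>>> ppsd.plot()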
[Figure: probabilistic power spectral density of BW.KW1..EHZ, power versus period (0.01 to 100 s) with a probability color scale in %, and the data basis shown along the time axis]
1.25 Array Response Function
The following code block shows how to plot the array transfer function for beamforming as a function of wavenumber,
using the ObsPy function obspy.signal.array_analysis.array_transff_wavenumber().
import matplotlib.pyplot as plt
import numpy as np
from obspy.signal.array_analysis import array_transff_wavenumber

# generate array coordinates [x, y, z] in m (values are assumptions)
coords = np.array([[10., 60., 0.], [200., 50., 0.], [-120., 170., 0.],
                   [-100., -150., 0.], [30., -220., 0.]])
# coordinates in km
coords /= 1000.

# set limits for the wavenumber grid (matching the figure below)
klim = 40.
kxmin, kxmax, kymin, kymax = -klim, klim, -klim, klim
kstep = klim / 100.

# compute the transfer function as a function of wavenumber difference
transff = array_transff_wavenumber(coords, klim, kstep, coordsys='xy')

# plot
plt.pcolor(np.arange(kxmin, kxmax + kstep * 1.1, kstep) - kstep / 2.,
           np.arange(kymin, kymax + kstep * 1.1, kstep) - kstep / 2.,
           transff.T)
plt.colorbar()
plt.clim(vmin=0., vmax=1.)
plt.xlim(kxmin, kxmax)
plt.ylim(kymin, kymax)
plt.show()
1.26 Continuous Wavelet Transform
The following is a short example for a continuous wavelet transform using ObsPy's internal routine based on
[Kristekova2006].
import numpy as np
import matplotlib.pyplot as plt
from obspy.core import read
from obspy.signal.tf_misfit import cwt
st = read()
tr = st[0]
npts = tr.stats.npts
dt = tr.stats.delta
t = np.linspace(0, dt * npts, npts)
[Figure: array transfer function in wavenumber space (-40 to 40), color scale 0.0 to 1.0]
f_min = 1
f_max = 50

# compute the scalogram (w0=8 is an assumption for this sketch)
scalogram = cwt(tr.data, dt, 8, f_min, f_max)

fig = plt.figure()
ax = fig.add_subplot(111)
x, y = np.meshgrid(
    t,
    np.logspace(np.log10(f_min), np.log10(f_max), scalogram.shape[0]))
ax.pcolormesh(x, y, np.abs(scalogram))
ax.set_xlabel("Time after %s [s]" % tr.stats.starttime)
ax.set_ylabel("Frequency [Hz]")
ax.set_yscale('log')
ax.set_ylim(f_min, f_max)
plt.show()
[Figure: scalogram, frequency in Hz on a log scale versus time after 2009-08-24T00:20:03.000000Z]
Small script doing the continuous wavelet transform using the mlpy package (version 3.5.0) for infrasound data
recorded at Yasur in 2008. Further details on wavelets can be found at Wikipedia; in the article the omega0 factor
is denoted as sigma. (Really sloppy and possibly incorrect: the omega0 factor tells you how often the wavelet fits
into the time window, dj defines the spacing in the scale domain.)
import matplotlib.pyplot as plt
from obspy.core import read
import numpy as np
import mlpy
tr = read("http://examples.obspy.org/a02i.2008.240.mseed")[0]
omega0 = 8
wavelet_fct = "morlet"
scales = mlpy.wavelet.autoscales(N=len(tr.data), dt=tr.stats.delta, dj=0.05,
wf=wavelet_fct, p=omega0)
spec = mlpy.wavelet.cwt(tr.data, dt=tr.stats.delta, scales=scales,
wf=wavelet_fct, p=omega0)
# approximate scales through frequencies
freq = (omega0 + np.sqrt(2.0 + omega0 ** 2)) / (4 * np.pi * scales[1:])
fig = plt.figure()
ax1 = fig.add_axes([0.1, 0.75, 0.7, 0.2])
ax2 = fig.add_axes([0.1, 0.1, 0.7, 0.60])
ax3 = fig.add_axes([0.83, 0.1, 0.03, 0.6])
t = np.arange(tr.stats.npts) / tr.stats.sampling_rate
ax1.plot(t, tr.data, 'k')
img = ax2.imshow(np.abs(spec), extent=[t[0], t[-1], freq[-1], freq[0]],
                 aspect='auto', interpolation="nearest")
ax2.set_yscale('log')
fig.colorbar(img, cax=ax3)
plt.show()
1.27 Time Frequency Misfit
The tf_misfit module offers various time frequency misfit functions based on [Kristekova2006] and
[Kristekova2009].
Here are some examples of how to use the included plotting tools:
import numpy as np
from obspy.signal.tf_misfit import plotTfr
# general constants
tmax = 6.
dt = 0.01
npts = int(tmax / dt + 1)
t = np.linspace(0., tmax, npts)
fmin = .5
fmax = 10
[Figure: infrasound trace and mlpy scalogram from the continuous wavelet transform example]
f1 = 2.
phi1 = 0.
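The construction of the test signal and the plotting call are missing from the listing; a minimal sketch (the signal
shape is illustrative):
t1 = 2.
H = (np.sign(t - t1) + 1) / 2
st1 = (t - t1) * np.exp(-2 * (t - t1)) * \
    np.cos(2. * np.pi * f1 * (t - t1) + phi1 * np.pi) * H
plotTfr(st1, dt=dt, fmin=fmin, fmax=fmax)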
[Figure: plotTfr output, time frequency representation with the time series and spectrum on the margins]
Time Frequency Misfits are appropriate for smaller differences of the signals. Continuing the example from above:
from scipy.signal import hilbert
from obspy.signal.tf_misfit import plotTfMisfits
# reference signal
st2 = st1.copy()
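# the signal distortion and the plotting call are missing here; a sketch:
amp_fac = 1.1  # small amplitude error (illustrative)
st1a = st1 * amp_fac
plotTfMisfits(st1a, st2, dt=dt, fmin=fmin, fmax=fmax)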
plt.show()
[Figure: plotTfMisfits output, envelope misfits (FEM, TFEM, TEM) and phase misfits (FPM, TFPM, TPM) with EM = 0.10, PM = 0.00]
Time Frequency GOFs are appropriate for large differences of the signals. Continuing the example from above:
from obspy.signal.tf_misfit import plotTfGofs
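The corresponding example code is missing here; a sketch, distorting the signal with a large phase shift (the shift
value is an assumption):
st1p = np.real(np.abs(hilbert(st1)) *
               np.exp((np.angle(hilbert(st1)) + 0.1 * np.pi) * 1j))
plotTfGofs(st1p, st2, dt=dt, fmin=fmin, fmax=fmax)
plt.show()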
[Figure: plotTfMisfits output for a pure phase error, EM = 0.00, PM = 0.10]
plt.show()
[Figure: plotTfGofs output, envelope (FEG, TFEG, TEG) and phase (FPG, TFPG, TPG) goodness-of-fit values with EG = 1.35, PG = 10.00]
For multi-component data and global normalization of the misfits, the axes are scaled accordingly. Continuing the
example from above:
# amplitude error
amp_fac = 1.1
# reference signals
st2_1 = st1.copy()
[Figure: plotTfGofs output, EG = 9.99, PG = 2.00]
st2_2 = st1.copy() * 5.
st2 = np.c_[st2_1, st2_2].T
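The distorted two-component signal and the plotting call are missing at this point; a sketch (the use of
norm='global' is an assumption based on the text above):
st1a = np.c_[st1 * amp_fac, st1 * 5. * amp_fac].T
plotTfMisfits(st1a, st2, dt=dt, fmin=fmin, fmax=fmax, norm='global')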
[Figure: plotTfMisfits output for two-component data with global normalization, EM = 0.02, PM = 0.00]
Local normalization makes it possible to resolve frequency and time ranges away from the largest amplitude waves,
but tends to produce artifacts in regions where there is no energy at all; in this analytical example e.g. for the high
frequencies before the onset of the signal. Manual setting of the limits is thus necessary:
# amplitude and phase error
amp_fac = 1.1
# reference signal
st2 = st1.copy()
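ph_shift = 0.1  # phase error (line reconstructed, value assumed)
# a sketch completing the example: perturb the signal as before and plot the
# misfits with local normalization and manually set limits (the norm and clim
# keyword arguments of plotTfMisfits; the clim value is an assumption)
st1p = hilbert(st1)
st1p = np.real(np.abs(st1p) * np.exp((np.angle(st1p) + ph_shift * np.pi) * 1j))
st1p *= amp_fac
plotTfMisfits(st1p, st2, dt=dt, fmin=fmin, fmax=fmax, norm='local', clim=0.15)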
plt.show()

[Figure: time frequency misfit plots with local normalization; EM = 0.10, PM = 0.00]
[Figure: time frequency misfit plots with local normalization and manually set limits; EM = 0.10, PM = 0.00]
1.28 Visualize Data Availability of Local Waveform Archive

Often you have a bunch of data and want to know which station is available at what time. For this purpose, ObsPy ships the obspy-scan script (automatically available after installation), which detects the file format (MiniSEED, SAC, SACXY, GSE2, SH-ASC, SH-Q, SEISAN, etc.) from the header of the data files. Gaps are plotted as vertical red lines, start times of available data are plotted as crosses and the data themselves are plotted as horizontal lines.
The script can be used to scan through thousands of files (it has already been used with 30000 files, execution time ca. 45 min); month/year ranges are plotted automatically. It opens an interactive plot in which you can zoom in.
Execute something like the following line from the command prompt, using e.g. wildcards to match the files:
$ obspy-scan /bay_mobil/mobil/20090622/1081019/*_1.*
1.29 Travel Time Plot

The following lines show how to create a simple travel time plot for a given distance range, a selection of phases and the iasp91 velocity model, using the travelTimePlot() function of the module obspy.taup.
from obspy.taup.taup import travelTimePlot
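# a sketch of the corresponding plotting call; the distance range, phase list
# and source depth are assumptions chosen to match the figure below
travelTimePlot(min_degree=0, max_degree=50, phases=['P', 'S', 'PP'],
               depth=120, model='iasp91')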
[Figure: travel time curves of the phases P, S and PP for the iasp91 model, time (minutes) versus distance (degrees)]

1.30 Cross Correlation Pick Correction

This example shows how to align the waveforms of phase onsets of two earthquakes in order to correct the original pick times, which can never be set perfectly consistently in routine analysis. A parabola is fitted to the concave part of the cross correlation function around its maximum, following the approach of [Deichmann1992].
To adjust the parameters (i.e. the time window used around the pick and the filter settings) and to validate and check the results, the options plot and filename can be used to open plot windows or save the figure to a file.
See the documentation of xcorrPickCorrection() for more details.
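The snippets below assume that the two waveforms and the corresponding preliminary picks have been set up along the following lines (the file names and pick times are placeholders, not values from the original example):

from obspy.core import read, UTCDateTime
from obspy.signal.cross_correlation import xcorrPickCorrection

# waveforms of the two events, recorded on the same station and channel
tr1 = read("event_1.mseed")[0]
tr2 = read("event_2.mseed")[0]
# preliminary pick times of the phase onset from routine analysis
t1 = UTCDateTime("2010-05-27T16:24:33.3")  # placeholder
t2 = UTCDateTime("2010-05-27T16:27:30.5")  # placeholder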
# estimate the time correction for pick 2 without any preprocessing and open
# a plot window to visually validate the results
dt, coeff = xcorrPickCorrection(t1, tr1, t2, tr2, 0.05, 0.2, 0.1, plot=True)
print "No preprocessing:"
print " Time correction for pick 2: %.6f" % dt
print " Correlation coefficient: %.2f" % coeff
# estimate the time correction with bandpass prefiltering
dt, coeff = xcorrPickCorrection(t1, tr1, t2, tr2, 0.05, 0.2, 0.1, plot=True,
                                filter="bandpass",
                                filter_options={'freqmin': 1, 'freqmax': 10})
print "Bandpass prefiltering:"
print " Time correction for pick 2: %.6f" % dt
print " Correlation coefficient: %.2f" % coeff
The example will print the time correction for pick 2 and the respective correlation coefficient and open a plot window
for correlations on both the original and preprocessed data:
No preprocessing:
Time correction for pick 2: -0.014459
Correlation coefficient: 0.92
Bandpass prefiltering:
Time correction for pick 2: -0.013025
Correlation coefficient: 0.98
[Figure: pick correction on BW.UH1..EHZ without preprocessing: aligned traces and cross correlation function with parabola fit; 0.92 at -0.014 seconds correction]
[Figure: pick correction on BW.UH1..EHZ with bandpass prefiltering: aligned traces and cross correlation function with parabola fit; 0.98 at -0.013 seconds correction]
CHAPTER TWO

ADVANCED EXERCISE

2.1 Advanced Exercise
In the advanced exercise we show how ObsPy can be used to develop an automated processing workflow. We start out with very simple tasks and then automate the routine step by step. Solutions are provided for all exercises.
This practical is intended to demonstrate how ObsPy can be used to develop workflows for data processing and analysis that have short, easy-to-read and extensible source code. The overall task is to automatically estimate local magnitudes of earthquakes using data of the SED network. We will start with simple programs with manually specified, hard-coded values and build on them step by step to make the program more flexible and dynamic. Some details of the magnitude estimation would technically be done a little differently, but here we rather want to focus on the general workflow.
Fetch a list of events from EMSC for the region of Valais/SW-Switzerland on April 3rd, 2012. Use the Client provided in obspy.neries. Note down the catalog origin times, epicenters and magnitudes.
1. Use the file LKBD_WA_CUT.MSEED to read MiniSEED waveform data of the larger earthquake. These data have already been simulated to (demeaned) displacement on a Wood-Anderson seismometer (in meters) and trimmed to the right time span. Compute the absolute maximum for both North and East component and use the larger value as the zero-to-peak amplitude estimate. Estimate the local magnitude Mlh used at the Swiss Seismological Service (SED) using an epicentral distance of depi = 20 (km), a = 0.018 and b = 2.17 with the following formula (mathematical functions are available in Python's math module):

Mlh = log10(amp · 1000 mm/m) + a · depi + b
2. Calculate the epicentral distance from the station coordinates (46.387°N, 7.627°E) and the catalog epicenter fetched above (46.218°N, 7.706°E). Some useful routines for such tasks are included in obspy.core.util.geodetics.
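For example, gps2DistAzimuth() from that module returns the distance in meters along with azimuth and backazimuth, so the epicentral distance in kilometers can be obtained like this:

from obspy.core.util.geodetics import gps2DistAzimuth

dist, az, baz = gps2DistAzimuth(46.218, 7.706, 46.387, 7.627)
epi_dist = dist / 1000.0  # epicentral distance in kilometers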
1. Modify the existing code and use the file LKBD.MSEED to read the original MiniSEED waveform data in counts. Set up two dictionaries containing the response information of both the original instrument (a LE3D-5s) and the Wood-Anderson seismometer in poles-and-zeros formulation. Please note that for historic reasons the naming of the keys differs from the usual naming. Each PAZ dictionary needs to contain sensitivity (overall sensitivity of the seismometer/digitizer combination), gain (A0 normalization factor), poles and zeros. Check that the value of water_level is not too high, to avoid overamplified low frequency noise at short-period stations. After the instrument simulation, trim the waveform to a shorter time window around the origin time (2012-04-03T02:45:03) and calculate Mlh as before. Use the following values for the PAZ dictionaries:
                 LE3D-5s                                    Wood-Anderson
poles            -0.885+0.887j, -0.885-0.887j, -0.427+0j    -6.2832-4.7124j, -6.2832+4.7124j
zeros            0j, 0j, 0j                                 0j
gain             1.009                                      1
sensitivity      167364000.0                                2800
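For instance, the Wood-Anderson column of this table translates to a dictionary of the following form (using the key naming described above):

paz_wa = {'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],
          'zeros': [0j],
          'gain': 1,
          'sensitivity': 2800}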
2. Instead of the hard-coded values, read the response information from a locally stored dataless SEED file LKBD.dataless. Use the Parser of the module obspy.xseed to extract the poles-and-zeros information of the used channel.
3. We can also request the response information from WebDC using the ArcLink protocol. Use the Client provided in the obspy.arclink module (specify e.g. user="[email protected]").
1. Modify the existing code and fetch waveform data around the origin time given above for station LKBD (network CH) via ArcLink from WebDC using obspy.arclink. Use a wildcarded channel="EH*" to fetch all three components. Use the keyword argument metadata=True to fetch response information and station coordinates along with the waveform. The PAZ and coordinate information will automatically get attached to the Stats object of all traces in the returned Stream object during the waveform request. During instrument simulation use the keyword argument paz_remove='self' to use every trace's attached PAZ information fetched from WebDC. Calculate Mlh as before.
2. Use a list of station names (e.g. LKBD, SIMPL, DIX) and perform the magnitude estimation in a loop for each station. Use a wildcarded channel="[EH]H*" to fetch the respective streams for both short-period and broadband stations. Compile a list of all station magnitudes and compute the network magnitude as its median (available in the numpy module).
3. Extend the network magnitude estimate by using all available stations in network CH. Get a list of stations using the ArcLink client and loop over this list. Use a wildcarded channel="[EH]H[ZNE]", check if there are three traces in the returned stream and skip to the next station otherwise (some stations have inconsistent component codes). Put a try/except around the waveform request to skip to the next station and avoid interruption of the routine in case no data can be retrieved and an Exception gets raised. Also add an if/else and use a = 0.0038 and b = 3.02 in the station magnitude calculation for epicentral distances of more than 60 kilometers.
In this additional advanced exercise we enhance the routine to be independent of a priori known origin times by using a coincidence network trigger for event detection.
• fetch a few hours of Z component data for 6 stations in Valais / SW-Switzerland
• run a coincidence trigger like shown in the Trigger Tutorial
• loop over detected network triggers, store the coordinates of the closest station as the epicenter
• loop over triggers, use the trigger time to select the time window and use the network magnitude estimation
code like before
2.1.6 Solutions
import obspy.neries
client = obspy.neries.Client()
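# the actual event request; a sketch assuming the getEvents() method of the
# obspy.neries Client (the exact region bounds here are assumptions)
events = client.getEvents(min_datetime="2012-04-03T00:00:00",
                          max_datetime="2012-04-04T00:00:00",
                          min_latitude=46.1, max_latitude=46.4,
                          min_longitude=7.4, max_longitude=8.0)
print len(events), "events fetched."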
from obspy.core import read
from math import log10

st = read("../data/LKBD_WA_CUT.MSEED")
tr_n = st.select(component="N")[0]
ampl_n = max(abs(tr_n.data))
tr_e = st.select(component="E")[0]
ampl_e = max(abs(tr_e.data))
# use the larger value of both horizontal components as amplitude estimate
ampl = max(ampl_n, ampl_e)
epi_dist = 20
a = 0.018
b = 2.17
ml = log10(ampl * 1000) + a * epi_dist + b
print ml
from obspy.core import read
from obspy.core.util.geodetics import gps2DistAzimuth
from math import log10

st = read("../data/LKBD_WA_CUT.MSEED")
tr_n = st.select(component="N")[0]
ampl_n = max(abs(tr_n.data))
tr_e = st.select(component="E")[0]
ampl_e = max(abs(tr_e.data))
ampl = max(ampl_n, ampl_e)
sta_lat = 46.38703
sta_lon = 7.62714
event_lat = 46.218
event_lon = 7.706
# epicentral distance in kilometers (gps2DistAzimuth returns meters)
epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
epi_dist = epi_dist / 1000
a = 0.018
b = 2.17
ml = log10(ampl * 1000) + a * epi_dist + b
print ml
from obspy.core import read, UTCDateTime
from obspy.core.util.geodetics import gps2DistAzimuth
from math import log10

# response information of both instruments (values from the table above)
paz_le3d5s = {'poles': [-0.885 + 0.887j, -0.885 - 0.887j, -0.427 + 0j],
              'zeros': [0j, 0j, 0j], 'gain': 1.009,
              'sensitivity': 167364000.0}
paz_wa = {'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],
          'zeros': [0j], 'gain': 1, 'sensitivity': 2800}

st = read("../data/LKBD.MSEED")
# simulate a Wood-Anderson recording, with a low water level (value assumed)
st.simulate(paz_remove=paz_le3d5s, paz_simulate=paz_wa, water_level=10)
t = UTCDateTime("2012-04-03T02:45:03")
st.trim(t, t + 50)
tr_n = st.select(component="N")[0]
ampl_n = max(abs(tr_n.data))
tr_e = st.select(component="E")[0]
ampl_e = max(abs(tr_e.data))
ampl = max(ampl_n, ampl_e)
sta_lat = 46.38703
sta_lon = 7.62714
event_lat = 46.218
event_lon = 7.706
epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
epi_dist = epi_dist / 1000
a = 0.018
b = 2.17
ml = log10(ampl * 1000) + a * epi_dist + b
print ml
from obspy.core import read, UTCDateTime
from obspy.core.util.geodetics import gps2DistAzimuth
from obspy.xseed import Parser
from math import log10

paz_wa = {'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],
          'zeros': [0j], 'gain': 1, 'sensitivity': 2800}

st = read("../data/LKBD.MSEED")
# read the response of the used channel from the locally stored dataless SEED
parser = Parser("../data/LKBD.dataless")
paz_le3d5s = parser.getPAZ("CH.LKBD..EHZ")
st.simulate(paz_remove=paz_le3d5s, paz_simulate=paz_wa, water_level=10)
t = UTCDateTime("2012-04-03T02:45:03")
st.trim(t, t + 50)
tr_n = st.select(component="N")[0]
ampl_n = max(abs(tr_n.data))
tr_e = st.select(component="E")[0]
ampl_e = max(abs(tr_e.data))
ampl = max(ampl_n, ampl_e)
sta_lat = 46.38703
sta_lon = 7.62714
event_lat = 46.218
event_lon = 7.706
epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
epi_dist = epi_dist / 1000
a = 0.018
b = 2.17
ml = log10(ampl * 1000) + a * epi_dist + b
print ml
from obspy.core import read, UTCDateTime
from obspy.core.util.geodetics import gps2DistAzimuth
from obspy.arclink import Client
from math import log10

paz_wa = {'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],
          'zeros': [0j], 'gain': 1, 'sensitivity': 2800}

st = read("../data/LKBD.MSEED")
# request the response information from WebDC via ArcLink
client = Client(user="[email protected]")
t = st[0].stats.starttime
paz_le3d5s = client.getPAZ("CH", "LKBD", "", "EHZ", t)
st.simulate(paz_remove=paz_le3d5s, paz_simulate=paz_wa, water_level=10)
t = UTCDateTime("2012-04-03T02:45:03")
st.trim(t, t + 50)
tr_n = st.select(component="N")[0]
ampl_n = max(abs(tr_n.data))
tr_e = st.select(component="E")[0]
ampl_e = max(abs(tr_e.data))
ampl = max(ampl_n, ampl_e)
sta_lat = 46.38703
sta_lon = 7.62714
event_lat = 46.218
event_lon = 7.706
epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
epi_dist = epi_dist / 1000
a = 0.018
b = 2.17
ml = log10(ampl * 1000) + a * epi_dist + b
print ml
from obspy.core import UTCDateTime
from obspy.core.util.geodetics import gps2DistAzimuth
from obspy.arclink import Client
from math import log10

paz_wa = {'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],
          'zeros': [0j], 'gain': 1, 'sensitivity': 2800}

client = Client(user="[email protected]")
t = UTCDateTime("2012-04-03T02:45:03")
st = client.getWaveform("CH", "LKBD", "", "EH*", t - 300, t + 300,
                        metadata=True)
# use the PAZ information attached to each trace during the request
st.simulate(paz_remove='self', paz_simulate=paz_wa, water_level=10)
st.trim(t, t + 50)
tr_n = st.select(component="N")[0]
ampl_n = max(abs(tr_n.data))
tr_e = st.select(component="E")[0]
ampl_e = max(abs(tr_e.data))
ampl = max(ampl_n, ampl_e)
sta_lat = 46.38703
sta_lon = 7.62714
event_lat = 46.218
event_lon = 7.706
epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
epi_dist = epi_dist / 1000
a = 0.018
b = 2.17
ml = log10(ampl * 1000) + a * epi_dist + b
print ml
from obspy.core import UTCDateTime
from obspy.core.util.geodetics import gps2DistAzimuth
from obspy.arclink import Client
from math import log10
from numpy import median

paz_wa = {'poles': [-6.2832 - 4.7124j, -6.2832 + 4.7124j],
          'zeros': [0j], 'gain': 1, 'sensitivity': 2800}

client = Client(user="[email protected]")
t = UTCDateTime("2012-04-03T02:45:03")
mags = []
for station in ["LKBD", "SIMPL", "DIX"]:
    st = client.getWaveform("CH", station, "", "[EH]H*", t - 300, t + 300,
                            metadata=True)
    st.simulate(paz_remove='self', paz_simulate=paz_wa, water_level=10)
    st.trim(t, t + 50)
    tr_n = st.select(component="N")[0]
    ampl_n = max(abs(tr_n.data))
    tr_e = st.select(component="E")[0]
    ampl_e = max(abs(tr_e.data))
    ampl = max(ampl_n, ampl_e)
    # station coordinates attached during the request via metadata=True
    sta_lat = st[0].stats.coordinates.latitude
    sta_lon = st[0].stats.coordinates.longitude
    event_lat = 46.218
    event_lon = 7.706
    epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
    epi_dist = epi_dist / 1000
    a = 0.018
    b = 2.17
    ml = log10(ampl * 1000) + a * epi_dist + b
    print station, ml
    mags.append(ml)
net_mag = median(mags)
print "Network magnitude:", net_mag
from obspy.core import UTCDateTime
from obspy.arclink import Client
from numpy import median

# paz_wa, gps2DistAzimuth and log10 as in the previous solution

client = Client(user="[email protected]")
t = UTCDateTime("2012-04-03T02:45:03")
stations = client.getStations(t, t + 300, "CH")
mags = []
for station in [sta['code'] for sta in stations]:
    try:
        st = client.getWaveform("CH", station, "", "[EH]H[ZNE]", t - 300,
                                t + 300, metadata=True)
    except Exception:
        continue  # skip station if no data can be retrieved
    if len(st) != 3:
        continue  # skip stations with inconsistent component codes
    st.simulate(paz_remove='self', paz_simulate=paz_wa, water_level=10)
    st.trim(t, t + 50)
    tr_n = st.select(component="N")[0]
    ampl_n = max(abs(tr_n.data))
    tr_e = st.select(component="E")[0]
    ampl_e = max(abs(tr_e.data))
    ampl = max(ampl_n, ampl_e)
    sta_lat = st[0].stats.coordinates.latitude
    sta_lon = st[0].stats.coordinates.longitude
    event_lat = 46.218
    event_lon = 7.706
    epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
    epi_dist = epi_dist / 1000
    # use different attenuation parameters beyond 60 km epicentral distance
    if epi_dist < 60:
        a, b = 0.018, 2.17
    else:
        a, b = 0.0038, 3.02
    ml = log10(ampl * 1000) + a * epi_dist + b
    mags.append(ml)
net_mag = median(mags)
print "Network magnitude:", net_mag
from obspy.core import Stream, UTCDateTime
from obspy.arclink import Client
from obspy.signal.trigger import coincidenceTrigger
from numpy import median

# paz_wa, gps2DistAzimuth and log10 as in the previous solutions

client = Client(user="[email protected]")
t = UTCDateTime("2012-04-03T01:00:00")
t2 = t + 4 * 3600
# fetch a few hours of vertical component data (the station list here is a
# sketch; use six stations in Valais / SW-Switzerland as stated in the task)
st = Stream()
for station in ["LKBD", "SIMPL", "DIX"]:
    st += client.getWaveform("CH", station, "", "[EH]HZ", t, t2,
                             metadata=True)
st.taper()
st.filter("bandpass", freqmin=1, freqmax=20)
triglist = coincidenceTrigger("recstalta", 10, 2, st, 4, sta=0.5, lta=10)
print len(triglist), "events triggered."
for trig in triglist:
    # a sketch: store the coordinates of the first triggering station as
    # the epicenter
    closest = st.select(station=trig['stations'][0])[0]
    trig['latitude'] = closest.stats.coordinates.latitude
    trig['longitude'] = closest.stats.coordinates.longitude
    # per station magnitude estimation as in the previous solutions, using
    # the trigger time to select the time window (shown for one station)
    mags = []
    st_ev = client.getWaveform("CH", "LKBD", "", "[EH]H*",
                               trig['time'] - 50, trig['time'] + 250,
                               metadata=True)
    st_ev.simulate(paz_remove='self', paz_simulate=paz_wa, water_level=10)
    tr_n = st_ev.select(component="N")[0]
    ampl_n = max(abs(tr_n.data))
    tr_e = st_ev.select(component="E")[0]
    ampl_e = max(abs(tr_e.data))
    ampl = max(ampl_n, ampl_e)
    sta_lat = st_ev[0].stats.coordinates.latitude
    sta_lon = st_ev[0].stats.coordinates.longitude
    event_lat = trig['latitude']
    event_lon = trig['longitude']
    epi_dist, az, baz = gps2DistAzimuth(event_lat, event_lon, sta_lat, sta_lon)
    epi_dist = epi_dist / 1000
    ml = log10(ampl * 1000) + 0.018 * epi_dist + 2.17
    mags.append(ml)
    net_mag = median(mags)
    print "Network magnitude:", net_mag