tfp.sts.decompose_by_component
[View source on GitHub](https://github.com/tensorflow/probability/blob/v0.23.0/tensorflow_probability/python/sts/decomposition.py#L108-L219)
Decompose an observed time series into contributions from each component.
```python
tfp.sts.decompose_by_component(
    model, observed_time_series, parameter_samples
)
```
Used in the notebooks: [Structural Time Series Modeling Case Studies: Atmospheric CO2 and Electricity Demand](https://www.tensorflow.org/probability/examples/Structural_Time_Series_Modeling_Case_Studies_Atmospheric_CO2_and_Electricity_Demand)
This method decomposes a time series according to the posterior representation
of a structural time series model. In particular, it:

- Computes the posterior marginal mean and covariances over the additive
  model's latent space.
- Decomposes the latent posterior into the marginal blocks for each
  model component.
- Maps the per-component latent posteriors back through each component's
  observation model, to generate the time series modeled by that component.
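The three steps above can be sketched in plain NumPy on a toy model. This is only an illustration of the block-decomposition idea, not TFP's actual implementation: the latent dimensions, block layout, and observation vectors below are hypothetical stand-ins for what a real `tfp.sts.Sum` model's state-space representation would supply.

```python
import numpy as np

# Toy latent space: a 3-dim state shared by two components
# (say, 2 dims for a trend and 1 dim for a seasonal effect).
T, d = 10, 3
rng = np.random.default_rng(0)
latent_means = rng.normal(size=(T, d))          # posterior marginal means, [T, d]
latent_covs = np.stack([np.eye(d) * 0.1] * T)   # posterior marginal covariances, [T, d, d]

# Step 2: each component owns a contiguous block of the latent state.
blocks = {"trend": slice(0, 2), "seasonal": slice(2, 3)}

# Step 3: hypothetical per-component observation vectors mapping the
# latent block into observation space.
obs = {"trend": np.array([1.0, 0.0]), "seasonal": np.array([1.0])}

component_means, component_vars = {}, {}
for name, blk in blocks.items():
    h = obs[name]
    m = latent_means[:, blk]        # marginal block mean, [T, d_c]
    P = latent_covs[:, blk, blk]    # marginal block covariance, [T, d_c, d_c]
    component_means[name] = m @ h                              # mean series, [T]
    component_vars[name] = np.einsum('i,tij,j->t', h, P, h)    # variance series, [T]
```

Each component's series is thus a marginal Gaussian in observation space, which is what `decompose_by_component` packages as a `tfd.Distribution` per component.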
Args:

- `model`: An instance of `tfp.sts.Sum` representing a structural time series
  model.
- `observed_time_series`: Optional `float` `Tensor` of shape
  `batch_shape + [T, 1]` (omitting the trailing unit dimension is also
  supported when `T > 1`), specifying an observed time series. Any `NaN`s
  are interpreted as missing observations; missingness may also be
  explicitly specified by passing a `tfp.sts.MaskedTimeSeries` instance.
- `parameter_samples`: Python `list` of `Tensor`s representing posterior
  samples of model parameters, with shapes `[concat([
  [num_posterior_draws], param.prior.batch_shape,
  param.prior.event_shape]) for param in model.parameters]`. This may
  optionally also be a map (Python `dict`) of parameter names to
  `Tensor` values.

Returns:

- `component_dists`: A `collections.OrderedDict` instance mapping
  component `StructuralTimeSeries` instances (elements of `model.components`)
  to `tfd.Distribution` instances representing the posterior marginal
  distributions on the process modeled by each component. Each distribution
  has batch shape matching that of `posterior_means`/`posterior_covs`, and
  event shape of `[num_timesteps]`.
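The NaN-as-missing convention for `observed_time_series` can be sketched in NumPy. The series values below are made up for illustration; the boolean mask produced here has the same form as the `is_missing` argument one would pass to `tfp.sts.MaskedTimeSeries`.

```python
import numpy as np

# Hypothetical series with two missing observations encoded as NaN.
series = np.array([1.2, np.nan, 0.7, 3.4, np.nan, 2.1])

# Equivalent explicit-mask form: True marks a missing timestep.
is_missing = np.isnan(series)

# Replace NaNs with a placeholder; masked values are ignored by the model.
clean_series = np.where(is_missing, 0.0, series)
```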
Examples
Suppose we've built a model and fit it to data:

```python
day_of_week = tfp.sts.Seasonal(
    num_seasons=7,
    observed_time_series=observed_time_series,
    name='day_of_week')
local_linear_trend = tfp.sts.LocalLinearTrend(
    observed_time_series=observed_time_series,
    name='local_linear_trend')
model = tfp.sts.Sum(components=[day_of_week, local_linear_trend],
                    observed_time_series=observed_time_series)

num_steps_forecast = 50
samples, kernel_results = tfp.sts.fit_with_hmc(model, observed_time_series)
```
To extract the contributions of individual components, pass the time series
and sampled parameters into `decompose_by_component`:

```python
component_dists = tfp.sts.decompose_by_component(
    model,
    observed_time_series=observed_time_series,
    parameter_samples=samples)

# Component mean and stddev have shape `[len(observed_time_series)]`.
day_of_week_effect_mean = component_dists[day_of_week].mean()
day_of_week_effect_stddev = component_dists[day_of_week].stddev()
```
Using the component distributions, we can visualize the uncertainty for
each component:

```python
import numpy as np
from matplotlib import pylab as plt

num_components = len(component_dists)
xs = np.arange(len(observed_time_series))
fig = plt.figure(figsize=(12, 3 * num_components))
for i, (component, component_dist) in enumerate(component_dists.items()):
  # If in graph mode, replace `.numpy()` with `.eval()` or `sess.run()`.
  component_mean = component_dist.mean().numpy()
  component_stddev = component_dist.stddev().numpy()

  ax = fig.add_subplot(num_components, 1, 1 + i)
  ax.plot(xs, component_mean, lw=2)
  ax.fill_between(xs,
                  component_mean - 2 * component_stddev,
                  component_mean + 2 * component_stddev,
                  alpha=0.5)
  ax.set_title(component.name)
```
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2023-11-21 UTC.