
[L4-TS] Introduction to Time Series Analysis
KNIME AG
Download your course materials from the KNIME Hub
Exercises – Getting Started

§ Click this KNIME Hub link to access the public Courses space on the KNIME Hub
§ Click L4-TS Introduction to Time Series Analysis

© 2021 KNIME AG. All rights reserved. 3


Exercises – Getting Started
Import the workflow group to your local workspace
by drag and drop



Exercises – Getting Started
Alternatively, click the cloud icon to download the
workflow group. Launch KNIME Analytics Platform.



Exercises – Getting Started
§ Right-click LOCAL in the KNIME Explorer and select Import KNIME Workflow…
§ Click Browse…, navigate to the “L4-TS Introduction to Time Series Analysis.knar” file and click Finish



Exercises – Getting Started

§ Find the exercise materials in the KNIME Explorer
§ Double click an exercise workflow to open it. Follow the instructions.



KNIME Time Series Analysis
Course - Session 1

KNIME AG

Prof. Daniele Tonini


§ [email protected]
Maarit Widmann
§ [email protected]
Corey Weisinger
§ [email protected]
Agenda

§ Introduction: What is Time Series Analysis


§ Today’s Task, Dataset & Components
§ Descriptive Analytics: Load, Clean, Explore



Introduction
What is Time Series Analysis?
Introduction

Since social and economic conditions are constantly changing over time, data
analysts must be able to assess and predict the effects of these changes in
order to suggest the most appropriate actions to take

§ It is therefore necessary to use appropriate forecasting techniques to support
business, operations, technology, research, etc.
§ More accurate and less biased forecasts can be one of the most effective
drivers of performance in many fields

→ Time Series Analysis, using statistical methods, enhances our understanding
of, and predictions for, any quantitative variable of interest (sales,
resources, financial KPIs, logistics, sensor measurements, etc.)



Applications
The fields of application of Time Series Analysis are numerous: Demand Planning
is one of the most common applications; however, from industry to industry there
are other possible uses. For instance:

Logistics & Transportation § Forecasting of shipped packages: workforce planning
Retail grocery § Forecasting of sales during promotions: optimizing warehouses
Insurance § Claims prediction: determining insurance policies
Manufacturing § Predictive Maintenance: improving operational efficiency
Energy & Utilities § Energy load forecasting: better planning and trading strategies



TS data vs. Cross Sectional data

A time series is made up of dynamic data collected over time! Consider the
differences between:

1. Cross Sectional Data


§ Multiple objects observed at a particular point in time
§ Examples: customers’ behavioral data at today’s update, companies’ account balances
at the end of the last year, patients’ medical records at the end of the current month, …

2. Time Series Data


§ One single object (product, country, sensor, ..) observed over multiple equally-spaced
time periods
§ Examples: quarterly Italian GDP of the last 10 years, weekly supermarket sales of the
previous year, yesterday’s hourly temperature measurements, …



Examples

Time series example 1


Numbers of Doctorates Awarded in US, annual data – Engineering Vs. Education
[Line chart: annual doctorates awarded in the US, Engineering vs. Education, 2005–2015]

At a glance
Annual data
Different «directions»
No big fluctuations



Examples

Time series example 2


Monthly carbon dioxide concentration (globally averaged over marine surface sites)
[Line chart: monthly carbon dioxide concentration, 2012–2016]

At a glance
Monthly basis data
Regular pattern
Constant fluctuations
Average value increases year by year



Examples

Time series example 3


LinkedIn daily stock market closing price

At a glance
Daily basis data
Very irregular dynamic
Many sudden changes



Examples

Time series example 4


Number of photos uploaded to Instagram every minute (regional sub-sample)

At a glance
Minute basis data
Almost regular daily pattern but with some anomalies and spikes



Examples

Time series example 5


Acceleration detected by smartphone sensors during a workout session (10 seconds)

At a glance
Millisecond basis data
Each sensor has its own dynamics



Objectives

Main Objectives of Time Series Analysis


§ Summary description (graphical and numerical) of data points vs. time
§ Interpretation of specific series features (e.g. seasonality, trend, relationship
with other series)
§ Forecasting (e.g. predict the series values in 𝑡 + 1, 𝑡 + 2, … , 𝑡 + 𝑘)
§ Hypothesis testing and Simulation (comparing different scenarios)



Objectives

Someone once said: «Forecasting is the art of saying what will happen in the
future and then explaining why it didn’t»
§ Frequently true... history is full of examples of «bad forecasts», just like the IBM
Chairman’s famous quote from 1943: “there is a world market for maybe five computers
in the future.”
The reality is that forecasting is a really tough task, and it is easy to do it really badly.

But we can definitely do better using quantitative methods... and common sense!

GOAL: Reduce uncertainty and improve the accuracy of our forecasts



Definition

General definition: “A time series is a collection of observations made
sequentially through time, whose dynamics are often characterized by
short/long period fluctuations (seasonality and cycles) and/or a long period
direction (trend)”

Such observations may be denoted by Y(1), Y(2), Y(3), …, Y(t), …, Y(T), where
Y(t) is the observation at time t, since data are usually collected at discrete
points in time

§ The interval between observations can be any time interval (seconds, minutes, hours, days, weeks,
months, quarters, years, etc.) and we assume that these time periods are equally spaced
§ One of the most distinctive characteristics of a time series is the mutual dependence between the
observations, generally called SERIAL CORRELATION or AUTOCORRELATION



Task & Dataset
Electricity Consumption by the
Hour in Ireland
The Dataset: Electricity Consumption
Smart Meters to measure Electricity Usage

Task: Demand Prediction of kW used in the next hour

§ Irish Smart Energy Trials
§ http://www.seai.ie/News_Events/Press_Releases/2012/Full_Data_from_National_Smart_Meter_Trial_Published.html
§ 6000 households & businesses from Jul 2009 to Aug 2010

§ The original Dataset
§ ID by household/store in Ireland
§ Date&Time (Jul 2009 - Aug 2010)
§ kW used in the past half an hour



Task: Electricity Demand Prediction

One big fat Time Series of kW used every hour in the whole of Ireland
A few, a bit fat, Time Series of kW used by similar* households in Ireland
Many fine Time Series of kW used every hour at each household in Ireland

*Similar electrically speaking



Data Processing: daily & weekly KPI
Daily KPI = % of kW used in <time window> over total kW in day

Meter ID  Early morning  Morning  Lunch  Afternoon  Late afternoon  Evening  Night
          06-09          09-12    12-15  15-18      18-21           21-24    24-06
1000      6              2        3      3          7               28       51
1001      5              22       16     24         23              5        5
1002      ...            ...      ...    ...        ...             ...      ...

[Average of time series by cluster, e.g.:
Cluster 11 (early morning, late afternoon), Cluster 18 (business days 9-5, weekend),
Cluster 0 (morning, evening)]


Clustering Energy Consumption Data

§ Clustering is used in order to identify a few groups with a particular behavior
instead of inspecting and modeling many individual behaviors. We model the energy
consumption of a prototype of each cluster.
§ Clustering by the k-Means algorithm, based on
§ energy consumption on business days and over the weekend
§ total energy consumption
§ yearly, monthly, weekly, etc. energy consumption
§ k-Means Algorithm
§ Based on the Euclidean distance of numeric columns; our data only contain numeric columns
§ Missing values need to be replaced, in our data by 0

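The clustering step above can be sketched in a few lines of Python. This is a hedged illustration with scikit-learn, not the course workflow: the KPI column names, the random data, and the cluster count are made up.

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical daily-KPI table: share of each household's daily kW per time window
rng = np.random.default_rng(42)
kpi = pd.DataFrame(
    rng.random((100, 7)),
    columns=["early_morning", "morning", "lunch", "afternoon",
             "late_afternoon", "evening", "night"],
)
kpi.iloc[3, 2] = np.nan  # simulate a missing reading

# k-Means works on the Euclidean distance of numeric columns, so missing
# values must be replaced first; here by 0, as in the course data
kpi = kpi.fillna(0)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(kpi)
kpi["cluster"] = kmeans.labels_
```

Each household row now carries a cluster label, and the per-cluster average series can be modeled instead of 100 individual ones.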


Electricity KPIs on the KNIME Hub

https://kni.me/w/9pHnxeJUp8aueCJT



This Week‘s Challenge

§ Isolate, preprocess, and visualize data in cluster 26


§ Apply several techniques (e.g. Random Forest, ARIMA, LSTM)
to generate in-sample and out-of-sample forecasts
§ Evaluate the models and save them for forecasting comparison
§ Compare the accuracy of the models in predicting Electricity Usage
in cluster 26 in kW in the next week (168 hours)



Today‘s Challenge – Cluster 26

[Line plots of cluster 26: business days 9-5 vs. weekend, winter vs. summer]


This Week‘s Challenge – Final Results

§ Deploy the different techniques to generate out-of-sample forecasts for the next week


Components

§ Encapsulates a functionality as a KNIME workflow
§ E.g. execute a Python script via a component with a graphical UI
§ Function like regular nodes:
§ Start using them by drag and drop from the EXAMPLES Server / local directory
§ Configure them in the component’s configuration dialog
§ Also available on the KNIME Hub



Components on the KNIME Hub

Components published by KNIME


and the KNIME community

hub.knime.com



Components on the KNIME Hub

Drag&Drop

hub.knime.com



Components

§ Instances of shared
components are linked to the
master and are therefore
write-protected
§ Editable after disconnecting
the link or by a double click
in the component editor



Components

§ Components can have dialogs enabled by configuration nodes…
§ … and interactive views enabled by widget nodes and JavaScript based nodes



Time Series Components

§ Inspect, restore, and remove seasonality


§ Train and apply ARIMA models
§ Analyze residuals
§ And many more!



Component: Timestamp Alignment

§ Acquire continuously spaced data
§ In today’s example we verify a record exists for every hour
§ Otherwise create a missing value

Input: Time series to check for uniform sampling
Output: Time series with missing values at skipped sampling times

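Outside KNIME, the same alignment can be sketched with pandas. This is a hedged illustration: the timestamps and values are made up.

```python
import pandas as pd

# Hypothetical hourly readings where the 02:00 sample was skipped
ts = pd.Series(
    [1.2, 0.9, 1.5],
    index=pd.to_datetime(
        ["2009-07-01 00:00", "2009-07-01 01:00", "2009-07-01 03:00"]
    ),
)

# Re-index on a complete hourly grid: skipped sampling times become
# missing values, which is what the Timestamp Alignment component produces
aligned = ts.asfreq("h")
```

The resulting series has four rows, one per hour, with a missing value at 02:00 ready for missing-value handling downstream.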


Component: Aggregation Granularity

§ Extract granularities (year, month, hour, etc.) from a timestamp and aggregate
(sum, average, mode, etc.) data at the selected granularity
§ In today’s example we calculate the total energy consumption by hour, day, and
month

Input: Time series to aggregate
Output: Aggregated time series

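The same aggregation can be sketched with pandas resampling. A hedged illustration with a made-up constant series so the totals are easy to check:

```python
import numpy as np
import pandas as pd

# Two days of hypothetical half-hourly kW readings (a constant 1 kW for clarity)
idx = pd.date_range("2009-07-01", periods=96, freq="30min")
usage = pd.Series(np.ones(96), index=idx)

# Extract a granularity from the timestamp and aggregate at that level,
# here summing to hourly and daily totals
hourly = usage.resample("h").sum()
daily = usage.resample("D").sum()
```

With two readings per hour and 48 per day, each hourly total is 2.0 and each daily total 48.0.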


Descriptive Analytics
Load, Clean, and Explore
Time Series Properties: Main Elements

§ TREND
The general direction in which the series is running during a long period.
A TREND exists when there is a long-term increase or decrease in the data.
It does not have to be linear (it could be exponential or some other functional form).

§ CYCLE
Long-term fluctuations that occur regularly in the series.
A CYCLE is an oscillatory component (i.e. upward or downward swings) which is
repeated after a certain number of years, so it:
§ May vary in length and usually lasts several years (from 2 up to 20/30)
§ Is difficult to detect, because it is often confused with the trend component



Time Series Properties: Main Elements

§ SEASONAL EFFECTS
Short-term fluctuations that occur regularly, often associated with months or quarters.
A SEASONAL PATTERN exists when a series is influenced by seasonal factors (e.g. the
quarter of the year, the month, or the day of the week). Seasonality is always of a
fixed and known period.

§ RESIDUAL
Whatever remains after the other components have been taken into account.
The residual/error component is everything that is not considered in the previous
components. Typically, it is assumed to be the sum of a set of random factors
(e.g. a white noise series) not relevant for describing the dynamics of the series.



Seasonal effect: additive seasonality

§ When the seasonality is additive, the dynamics of the components are
independent from each other; for instance, an increase in the trend-cycle will
not cause an increase in the magnitude of seasonal dips
§ The difference between the trend and the raw data is roughly constant in similar
periods of time (months, quarters) irrespective of the tendency of the trend
EXAMPLES OF ADDITIVE SEASONALITY



Seasonal effect: multiplicative seasonality

§ In the multiplicative model the amplitude of the seasonality increases (decreases)
with an increasing (decreasing) trend; therefore, contrary to the additive case,
the components are not independent from each other
§ When the variation in the seasonal pattern (or the variation around the trend-
cycle) appears to be proportional to the level of the time series, a
multiplicative model is more appropriate.
EXAMPLES OF MULTIPLICATIVE SEASONALITY
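The contrast can be made concrete with two synthetic series (an illustration, not course data): in the additive case the seasonal swing has a fixed size, while in the multiplicative case it grows with the level of the series.

```python
import numpy as np

t = np.arange(48)                            # four "years" of monthly observations
trend = 10 + 0.5 * t
season = np.sin(2 * np.pi * t / 12)          # period-12 seasonal pattern

additive = trend + 3 * season                # constant-size seasonal swings
multiplicative = trend * (1 + 0.3 * season)  # swings proportional to the trend level

def yearly_amplitude(y, start):
    """Peak-to-trough range within one 12-observation cycle."""
    window = y[start:start + 12]
    return window.max() - window.min()

# Additive: first-year and last-year amplitudes are the same;
# multiplicative: the last-year amplitude is much larger
a_first, a_last = yearly_amplitude(additive, 0), yearly_amplitude(additive, 36)
m_first, m_last = yearly_amplitude(multiplicative, 0), yearly_amplitude(multiplicative, 36)
```

Comparing `a_first` with `a_last` and `m_first` with `m_last` reproduces the visual diagnostic: constant swings suggest an additive model, growing swings a multiplicative one.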



Seasonal effect: frequency

According to the data granularity and the type of seasonality you want to
model, it is important to consider the right seasonal frequency (i.e. how many
observations you have for every seasonal cycle)
§ No problem if your data points are years, quarters, months or weeks (in this case you will face only
annual seasonality), but if the frequency of observations is smaller than a week, things get more
complicated
§ For example, hourly data might have a daily seasonality (frequency=24), a weekly seasonality
(frequency=24×7=168) and an annual seasonality (frequency=24×365.25=8766)

Frequency by data granularity (rows) and cycle type (columns):

Data granularity   Hour   Day    Week    Year
Annual                                   1
Quarterly                                4
Monthly                                  12
Weekly                           1       52.18
Daily                     1      7       365.25
Hourly             1      24     168     8766
Minutes            60     1440   10080   525960

*Every year, on average, is made up of 365 days and 6 hours → so 365.25 days and
365.25/7=52.18 weeks


Numerical and graphical description of Time Series

§ The first step in Time Series Analysis is to produce a detailed exploratory
analysis of the data to get some insights about the distribution of the series
over time
§ This part must be performed using both numerical descriptive analyses and
graphical analyses, such as:

Graphical descriptive analyses:
§ Time plot
§ Seasonal plot
§ Box plot analysis
§ Scatterplots (Lag plots)
§ Plotting auto-correlation and cross-correlation functions

Numerical descriptive analyses:
§ Sampling period evaluation (start, end, data point features)
§ Number of data available
§ Missing value and outlier evaluation
§ Frequency distribution analysis
§ Summary descriptive statistics (overall and by season)



Graphical Analysis: Time Plot

§ The first chart in time series analysis is the TIME PLOT → the observations
are plotted against the time of observation, normally with consecutive
observations joined by straight lines
Example 1: TS Plot of Australian monthly wine sales, showing annual (additive)
seasonality and an upward trend
Example 2: TS Plot of the Air Passengers (monthly) series, showing annual
(multiplicative) seasonality and an upward trend



Graphical Analysis: Time Plot

§ Insights you can get just from a simple Time plot


§ Is there a trend? Could it be linear or not?
§ Is there a seasonality effect?
§ Are there any long term cycles?
§ Are there any sharp changes in behaviour? Can such changes be explained?
§ Are there any missing values or “gap” in the series?
§ Are there any outliers, i.e. observations that differ greatly from the general pattern?
§ Is there any turning point/changing trend?
[Example plots: a series with gaps, a series with a turning point, a series with an outlier]



Graphical Analysis: Time Plot

§ The TIME PLOT is very useful in cases where the series shows a very
constant/simple dynamic (strong trend and strong seasonality), but in other
cases it can be difficult to draw clear conclusions
[Example plots: Consumer Cost Index, Oxygen saturation]

§ Other graphical analyses and summary statistics could


improve/extend the insights given by the simple time plot!



Graphical Analysis: Seasonal Plot

§ Produce the Seasonal plot of the Time series in order to analyze the seasonal
component in more detail (and possible changes in seasonality over time)



Granularity and Line Plots

§ Show time series by hour, day, and month in line plots


§ Identify daily, weekly, and yearly seasonality



Graphical Analysis: Box Plot

Create the conditional Box plot of the Time series in order to better understand
the distribution of data in the same period of each season, focusing on specific
aspects such as outliers, skewness, variability, …



Conditional Box Plot

§ Inspect the distribution of energy consumption hour by hour

Category
column

Time series
column



Graphical Analysis: Lag plot

In time series analysis it’s important to analyze the correlation between the lagged
values of a time series (autocorrelation). The lag plot is a bivariate analysis,
consisting of a simple scatter plot of the values of the target variable in t vs. the
values of the same variable in t-k. Focusing on the correlation with the first lag (t-1),
you can see from the plot below that there is a strong linear relation between the
values in t and the values in t-1.

[Lag plot: AirPassengers vs. AirPassengers (Lag 1)]



Lag Column Node

§ Append past values as new columns


§ Shift cells l (lag interval) steps up
§ Duplicate the lag column L (lag value) times.
In each column the rows are shifted l, 2*l, .., L*l steps up

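In pandas the same lagging can be sketched with `shift`. The values are hypothetical and the column naming only mimics the node's output:

```python
import pandas as pd

ts = pd.DataFrame({"y": [10, 12, 14, 13, 15, 16]})

# Lag interval l=1, lag value L=3: append three columns holding the
# values 1, 2, and 3 steps in the past (the first rows become missing)
for k in range(1, 4):
    ts[f"y(-{k})"] = ts["y"].shift(k)
```

Row 3, for example, now carries its own value 13 together with the past values 14, 12, and 10, which is the shape needed to train lagged-value models.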


Numerical analysis: Auto Correlation Function (and ACF plot)

In order to go deeper into the autocorrelation structure of the time series, you
can create the Auto Correlation Function plot (ACF plot), also called a correlogram.
In this chart you can read the linear correlation index between the values in t and
all the possible lags (t-1, t-2, …, t-k); the chart below shows all the correlations
up to lag number 48.



Numerical analysis: Auto Correlation Function (and ACF plot)

Together with the ACF, sometimes it is useful to also analyze the Partial
Autocorrelation Function.
The ACF plot shows the autocorrelations, which measure the linear relationship
between y(t) and y(t-k) for different values of k, but consider that:
§ if y(t) and y(t-1) are correlated, then y(t-1) and y(t-2) must also be correlated
§ but then y(t) and y(t-2) might be correlated, simply because they are both connected to y(t-1)
§ → The Partial Autocorrelation Function (PACF) considers the linear relationship between y(t) and
y(t-k) after removing the effects of the other time lags 1, 2, 3, …, k-1

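The quantity the correlogram plots can be computed directly from its definition. A small numpy sketch on a synthetic seasonal series (in practice the plot would come from a plotting library or a stats package; the series here is made up):

```python
import numpy as np

def acf(y, k):
    """Lag-k autocorrelation: linear correlation between y_t and y_{t-k}."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    return np.sum(y[k:] * y[:-k]) / np.sum(y * y)

# Synthetic series with a strong period-12 seasonality plus a little noise
rng = np.random.default_rng(0)
t = np.arange(240)
y = np.sin(2 * np.pi * t / 12) + 0.1 * rng.standard_normal(240)

# High positive correlation at the seasonal lag, negative at the half period
r1, r6, r12 = acf(y, 1), acf(y, 6), acf(y, 12)
```

Reading `r12` close to 1 and `r6` strongly negative is exactly the signature a correlogram of a period-12 series shows.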


Numerical analysis: Descriptive statistics

From a numerical point of view, it’s important to produce statistics (total sample
and split by seasonal periods) of the time series, in order to have a more precise
idea of: number of valid data points vs. missing data, central tendency measures,
dispersions measures, percentiles, confidence intervals of the means, etc.
Time Series Month N obs Missing Mean Std. Dev Min Max 95% LCL 95% UCL
AirPassengers 1 12 0 241.8 101.0 112 417 177.6 305.9
AirPassengers 2 12 0 235.0 89.6 118 391 178.1 291.9
AirPassengers 3 12 0 270.2 100.6 132 419 206.3 334.1
AirPassengers 4 12 0 267.1 107.4 129 461 198.9 335.3
AirPassengers 5 12 0 271.8 114.7 121 472 198.9 344.7
AirPassengers 6 12 0 311.7 134.2 135 535 226.4 396.9
AirPassengers 7 12 0 351.3 156.8 148 622 251.7 451.0
AirPassengers 8 12 0 351.1 155.8 148 606 252.1 450.1
AirPassengers 9 12 0 302.4 124.0 136 508 223.7 381.2
AirPassengers 10 12 0 266.6 110.7 119 461 196.2 336.9
AirPassengers 11 12 0 232.8 95.2 104 390 172.4 293.3
AirPassengers 12 12 0 261.8 103.1 118 432 196.3 327.3
AirPassengers Total 144 0 280.3 120.0 104 622 260.5 300.1



Exercise 1: Loading and Exploring Data

§ Load in the Energy Usage Data


§ Perform preprocessing:
§ Convert string to Date&Time
§ Filter out unnecessary columns
§ Fill skipped sampling times with missing values
§ Handle missing values
§ Calculate hourly, daily, and monthly total energy
consumption
§ Plot the hourly, daily, and monthly
totals in line plots



Review of Installation Requirements

§ KNIME v4.0.2
§ Python Environment
§ StatsModels
§ Keras=2.2.4 & TensorFlow=1.8.0 & h5py
§ KNIME Python Integration
§ KNIME Deep Learning Keras Integration



KNIME Time Series Analysis
Course - Session 2
KNIME AG

Agenda

4. Descriptive Analytics: Non-stationarity, Seasonality, Trend





Descriptive Analytics
Stationarity, Seasonality, Trend
Stationarity

A time series can be defined as “stationary” when its properties do not depend
on the time at which the series is observed, so that:
§ the values oscillate frequently around the mean, independently of time
§ the variance of the fluctuations remains constant across time
§ the autocorrelation structure is constant over time and no periodic fluctuations exist
So, a time series that shows trend or seasonality is not stationary.

[Example plots: a stationary time series and two non-stationary time series]



Stationarity

Typical examples of non-stationary series are all series that exhibit a deterministic
trend (i.e. y(t) = α + β·t + ε(t)) or the so-called “Random Walk”

Random Walk (without drift) → y(t) = y(t-1) + ε(t) (where ε(t) is white noise)

A random walk model is very widely used for non-stationary data, particularly
financial and economic data.

§ Random walks typically have:
§ long periods of apparent trends up or down
§ sudden and unpredictable changes in direction
§ variance and autocorrelation that depend on time!

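A random walk is just the cumulative sum of white noise, so its behavior is easy to reproduce. A numpy sketch (synthetic data, with a simple from-the-definition autocorrelation helper):

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.standard_normal(2000)   # white noise
y = np.cumsum(eps)                # random walk: y_t = y_{t-1} + eps_t

def acf(series, k):
    """Lag-k sample autocorrelation."""
    s = series - series.mean()
    return np.sum(s[k:] * s[:-k]) / np.sum(s * s)

# The walk stays strongly autocorrelated even at a long lag, while its
# first differences (the noise itself) show no autocorrelation
r_walk = acf(y, 50)
r_diff = acf(np.diff(y), 50)
```

`r_walk` stays large while `r_diff` sits near zero, which is the time-dependent autocorrelation the bullet above refers to.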


Stationarity

Besides looking at the time plot of the data, the ACF plot is also useful for
identifying non-stationary TS:
→ for a stationary time series, the ACF will drop to zero (i.e. within confidence
bounds) relatively quickly, while the ACF of non-stationary data decreases slowly

[ACF plots: a stationary time series vs. a non-stationary one (a random walk!)]



Differencing

§ One way to make a time series stationary is to compute the differences between
consecutive observations → this is known as DIFFERENCING
§ Differencing can help stabilize the mean of a time series by removing changes in the level of a time
series, and so eliminating trend (and also seasonality, using a specific differencing order)
§ The Order of Integration of a Time Series, denoted I(d), reports the minimum number of differences
(d) required to obtain a stationary series (note: I(0) → the series is already stationary!)
§ Transformations such as logarithms can help to stabilize the variance of a time series

First order differencing: y'(t) = y(t) − y(t−1)
Example: 1478 − 983 = 495

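The same computation in pandas; the first two values match the example above and the rest are made up:

```python
import pandas as pd

y = pd.Series([983, 1478, 1320, 1560])

# First-order differencing: y'_t = y_t - y_{t-1}
# The first difference is undefined, so the first row is missing
y_diff = y.diff()
```

`y_diff` starts with a missing value followed by 495.0, so the slide's arithmetic (1478 − 983 = 495) falls out of a one-line call.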


Differencing

Example: use differencing to make a non-stationary series stationary

[Plots: Non-Stationary Time Series example 1 (Random Walk) and its first order
difference, TS2(t) − TS2(t−1)]

No significant autocorrelation exists → applying first differences to a random walk
generates a white noise


Differencing

Occasionally the differenced data will not appear stationary and it may be
necessary to difference the data a second time to obtain a stationary series:

y''(t) = y'(t) − y'(t−1) = (y(t) − y(t−1)) − (y(t−1) − y(t−2))*

[Example plots:
1. Original Time Series: non-stationary (mean and variance)
2. First Order Differencing: non-stationary (mean and variance)
3. Second Order Diff.: stationary in mean, but not in variance
4. Double Differencing applied to Log(Series): stationary series]

* it’s almost never necessary to go beyond second-order differences



Differencing

A seasonal difference is the difference between an observation and the
corresponding observation from the previous (seasonal) cycle:
y'(t) = y(t) − y(t−F)
where F is the (seasonal) cycle frequency
→ Seasonal differencing removes a strong and stable seasonality pattern
(and transforms the so-called “seasonal random walk”, i.e. y(t) = y(t−F) + ε(t),
into a white noise)

Consider that:
§ Sometimes it’s necessary to apply both “simple” first differencing and seasonal differencing
in order to obtain a stationary series
§ It makes no difference which is done first; the result will be the same
§ However, if the data have a strong seasonal pattern, it’s recommended that seasonal differencing be
done first, because sometimes the resulting series will be stationary and there will be no need for
further non-seasonal differencing
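A sketch of both operations on a synthetic monthly series (trend plus a period-12 pattern; not course data): the seasonal difference removes the pattern, and a further first difference removes the trend.

```python
import numpy as np
import pandas as pd

t = np.arange(48)
y = pd.Series(10 + 0.5 * t + 5 * np.sin(2 * np.pi * t / 12))  # trend + seasonality

seasonal = y.diff(12)          # y'(t) = y(t) - y(t-12): removes the seasonal pattern
stationary = seasonal.diff(1)  # additional first difference: removes the trend too

# After seasonal differencing only the trend's constant step remains (12 * 0.5 = 6)
step = seasonal.dropna().round(6).unique()
```

Because the seasonal pattern repeats exactly every 12 points, the seasonally differenced series collapses to the constant 6, and the extra first difference turns that into zeros.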



Differencing

Consider the following example, where a sequence of differencing steps has been
applied to the “Monthly Australian overseas visitors” TS:
1. Original Time Series: y(t)
2. Seasonal Differencing: y(t) − y(t−12)
3. First differencing applied to the seasonally differenced series:
   (y(t) − y(t−12)) − (y(t−1) − y(t−13))
4. Log transformation used to stabilize the variance:
   (log y(t) − log y(t−1)) − (log y(t−12) − log y(t−13))
→ The series now appears to be stationary


Differencing

Same example as the previous slide, but changing the order of the differencing
process → the final result is the same:
1. Original Time Series: y(t)
2. First Order Differencing: y(t) − y(t−1)
3. First Order Diff. after log transformation: log y(t) − log y(t−1)
4. Seasonal differencing applied to the first order differenced log series:
   (log y(t) − log y(t−1)) − (log y(t−12) − log y(t−13))
→ The series is now stationary


Component: Inspect Seasonality

§ Calculates (partial) autocorrelation with lagged values


§ In today’s example we inspect daily seasonality in the energy consumption data

Output: Autocorrelation plot

Input: Time series to inspect seasonality
Output: Lag with maximum correlation, available in the flow variable output. Local
maxima are listed in the data output.

Output: Partial
autocorrelation plot



Component: Remove Seasonality

§ Removes seasonality by differencing at the selected lag


§ In today’s example we remove daily seasonality from the energy consumption
data
Output: Differenced
time series
Input: Lag for
differencing

Output: Seasonality
Input: Time series column
with seasonality



Component: Decompose Signal

§ Extract trend, first and second seasonality, and residual from time series and
show the progress of time series in line plots and ACF plots

Output: Signal and columns for


trend, seasonality, and residual

Input: Signal to
decompose

Output: Line
plots and ACF
plots at the
different stages
of decomposing

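A hedged sketch of what such a decomposition does, using a simple moving-average trend and per-position seasonal means (classical decomposition on a synthetic series; the component's internals may differ):

```python
import numpy as np
import pandas as pd

period = 24
t = np.arange(96)
rng = np.random.default_rng(7)
y = pd.Series(0.1 * t + 3 * np.sin(2 * np.pi * t / period)
              + 0.2 * rng.standard_normal(96))

trend = y.rolling(period, center=True).mean()                 # moving-average trend
seasonal = (y - trend).groupby(t % period).transform("mean")  # mean per cycle position
residual = y - trend - seasonal                               # whatever remains

# The residual should be small noise compared to the original signal
ratio = residual.std() / y.std()
```

If the decomposition captured trend and seasonality, `residual` is close to the injected noise and `ratio` is small, which is exactly what the component's residual ACF plot is meant to confirm.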


Numeric Errors: Formulas
(all sums run over i = 1, …, n; f(xᵢ) is the prediction, yᵢ the actual value, ȳ the mean of the actuals)

Error Metric                           Formula                              Notes
R-squared                              1 − Σ(f(xᵢ) − yᵢ)² / Σ(yᵢ − ȳ)²      Universal range: the closer to 1 the better
Mean absolute error (MAE)              (1/n) Σ |f(xᵢ) − yᵢ|                 Equal weights to all distances; same unit as the target column
Mean squared error (MSE)               (1/n) Σ (f(xᵢ) − yᵢ)²                Common loss function
Root mean squared error (RMSE)         √( (1/n) Σ (f(xᵢ) − yᵢ)² )           Weights big differences more; same unit as the target column
Mean signed difference                 (1/n) Σ (f(xᵢ) − yᵢ)                 Only informative about the direction of the error
Mean absolute percentage error (MAPE)  (1/n) Σ |f(xᵢ) − yᵢ| / |yᵢ|          Requires non-zero target column values
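The formulas above translate directly into numpy. The actual and predicted values here are made up:

```python
import numpy as np

actual = np.array([100.0, 120.0, 90.0, 110.0])
predicted = np.array([95.0, 125.0, 85.0, 120.0])

err = predicted - actual                      # f(x_i) - y_i
mae = np.mean(np.abs(err))                    # mean absolute error
mse = np.mean(err ** 2)                       # mean squared error
rmse = np.sqrt(mse)                           # root mean squared error
msd = np.mean(err)                            # mean signed difference
mape = np.mean(np.abs(err) / np.abs(actual))  # mean absolute percentage error
r2 = 1 - np.sum(err ** 2) / np.sum((actual - actual.mean()) ** 2)
```

Note how MSD (1.25 here) hides the size of the errors because positive and negative differences cancel, while MAE (6.25) does not.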


Numeric Scorer Node

Evaluate numeric predictions


§ Compare actual target column values to
predicted values to evaluate goodness of fit.
§ Report R2, RMSE, MAPE, etc.



Partitioning for Time Series

§ When partitioning data for training a time series
model, it is important that your training data
comes before your test data chronologically.
§ This mirrors how the model is used in
deployment, always forecasting the future.
§ To do this, make sure your data is properly
sorted and partition with the “Take from top”
option in the KNIME Partitioning node.
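The "take from top" split can be sketched as (a minimal illustration of the idea, not the node's code):

```python
def chronological_split(series, train_fraction=0.8):
    """Time series split: training rows taken from the top (earliest observations)."""
    cut = int(len(series) * train_fraction)
    return series[:cut], series[cut:]

# The test set always contains the most recent observations
train, test = chronological_split(list(range(10)))
print(train, test)  # [0, 1, 2, 3, 4, 5, 6, 7] [8, 9]
```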



In-Sample vs. Out-sample

Out-Sample Static In-Sample Static

§ The data used to train the model is the sample data
§ Forecasts on the sample data are called in-sample forecasts
§ Forecasts on other data are called out-of-sample forecasts
§ Either forecast is called dynamic if it uses prior forecasts as its inputs;
if real values are used, it is called static



Model Evaluation

§ Assess the expected forecast accuracy of your model by comparing actual and
predicted time series
§ Training data vs. in-sample predictions
§ Test data vs. out-of-sample predictions

Visual comparison in a line plot: Numeric comparison by error metrics:



Exercise 2: Inspecting and Removing Seasonality

§ Use ACF plots to inspect seasonality


from energy consumption data
§ Remove seasonality and check
again the ACF plot
§ Compare hourly energy consumption
values before and after removing
seasonality
§ Optional: split energy consumption data
into a trend, seasonality, and residual



KNIME Time Series Analysis
Course - Session 3
KNIME AG

Agenda

5. Quantitative Forecasting: Classical techniques
6. ARIMA Models: ARIMA(p,d,q)





Quantitative Forecasting
Classical Techniques
Qualitative vs. Quantitative

The approaches to forecasting are essentially two: the qualitative approach and
the quantitative approach

§ Qualitative forecasting methods are adopted when historical data are not
available (e.g., estimating the revenues of a new company that clearly doesn't
have any data available). They are highly subjective methods.

§ Quantitative forecasting techniques are based on historical quantitative data:
→ starting from those data, the analyst tries to understand the underlying
structure of the phenomenon of interest and then uses the same historical
data for forecasting purposes

Our focus



Quantitative forecasting

The basis for quantitative analysis of time series is the assumption that the
factors that influenced the dynamics of the series in the past continue to have
similar effects in the future

Main methods used in Quantitative Forecasting:


1. Classical Time Series Analysis: analysis and forecasts are based on the identification of
structural components, like trend and seasonality, and on the study of the serial
correlation → univariate time series analysis
2. Explanatory models: analysis and forecasts are based both on past observations of the
series itself and on the relation with other possible predictors → multivariate time
series analysis
3. Machine learning models: different artificial neural network algorithms used to
forecast time series (in either a univariate or multivariate fashion)



Classical Time Series Analysis

The main tools used in the Classical Time Series Analysis are:
§ Classical Decomposition: considers the time series as the overlap of several
elementary components (i.e. trend, cycle, seasonality, error)
§ Exponential Smoothing: method based on the weighting of past observations,
taking into account the overlap of some key time series components (trend and
seasonality)
§ ARIMA (AutoRegressive Integrated Moving Average): class of statistical models
that aim to treat the correlation between values of the series at different points in
time using a regression-like approach and controlling for seasonality



Which model?

The choice of the most appropriate method of forecasting


is influenced by a number of factors, that are:
§ Forecast horizon, in relation to TSA objectives
§ Type/amount of available data
§ Expected forecastability
§ Required readability of the results
§ Number of series to forecast
§ Deployment frequency of the models
§ Development complexity
§ Development costs



Naïve Prediction

§ Predict values by the most recent known value


ŷ(T+h | T) = y(T),
where y(T) is the most recent known value and h = 1, 2, 3, …
§ Best predictor for true random walk data
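In code, the naive forecast is simply:

```python
def naive_forecast(history, horizon):
    """Naive prediction: repeat the most recent known value for h = 1, 2, 3, ..."""
    return [history[-1]] * horizon

print(naive_forecast([3.2, 3.5, 3.9], horizon=3))  # [3.9, 3.9, 3.9]
```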



Naïve seasonal Prediction

§ Predict values by the most recent known value


ŷ(T+h | T) = y(T + h − m(k+1)),

§ where m is the seasonal period, and k is the integer part of (h−1)/m (i.e., the
number of complete years in the forecast period prior to time T+h).
§ For example, with hourly data, the forecast for all future 6pm values is equal to
the last observed 6pm value.
§ Best predictor for seasonal random walk data
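The formula above can be sketched as (an illustrative helper following the definition, with 0-based list indexing):

```python
def seasonal_naive_forecast(history, horizon, m):
    """Seasonal naive: repeat the value observed one (or more) full seasons back."""
    T = len(history)
    forecasts = []
    for h in range(1, horizon + 1):
        k = (h - 1) // m  # complete seasons between the last observation and T+h
        forecasts.append(history[T + h - m * (k + 1) - 1])
    return forecasts

# With period m=3, the last observed season repeats into the future
print(seasonal_naive_forecast([10, 20, 30], horizon=6, m=3))  # [10, 20, 30, 10, 20, 30]
```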



Interpretation issues

IMPORTANT: Remember that quantitative data ARE NOT JUST NUMBERS…

…they have a story to tell, especially if your data are time series!
So always try to understand what's going on from a logical/business point
of view: try to give an interpretation to the observed dynamics!

Example 1: can you infer anything useful by looking at this series?



ARIMA Models
ARIMA(p,d,q)
Goal of this Section

1. Introduction to ARIMA
2. ARIMA Models
3. ARIMA Model selection
4. ARIMAX



Exponential Smoothing vs. ARIMA

While exponential smoothing models are based on a description of level, trend and
seasonality in the data, ARIMA models aim to describe the autocorrelations in the data

REMINDER: Just as correlation measures the amount of a linear relationship


between two variables, AUTOCORRELATION measures the linear relationship
between lagged values of a time series
§ There are several autocorrelation coefficients, depending on the lag length
§ r1 measures the relationship between y(t) and y(t−1), r2 measures the relationship
between y(t) and y(t−2), and so on

Before starting with ARIMA models, it is useful to review a preliminary concept:
what is a linear regression model?



ARIMA Models: General framework

An ARIMA model is a numerical expression indicating how the observations of a target


variable are statistically correlated with past observations of the same variable

§ ARIMA models are, in theory, the most general class of models for forecasting a time series which
can be “stationarized” by transformations such as differencing and lagging
§ The easiest way to think of ARIMA models is as fine-tuned versions of random-walk models: the fine-
tuning consists of adding lags of the differenced series and/or lags of the forecast errors to the
prediction equation, as needed to remove any remains of autocorrelation from the forecast errors

An ARIMA model, in its most complete formulation, considers:


§ An Autoregressive (AR) component, seasonal and not
§ A Moving Average (MA) component, seasonal and not
§ The order of Integration (I) of the series

That’s why we call it ARIMA (Autoregressive Integrated Moving Average)



ARIMA Models: General framework

The most common notation used for ARIMA models is:

ARIMA(p, d, q)(P, D, Q)s

where:
§ p is the number of autoregressive terms
§ d is the number of non-seasonal differences
§ q is the number of lagged forecast errors in the equation
§ P is the number of seasonal autoregressive terms
§ D is the number of seasonal differences
§ Q is the number of seasonal lagged forecast errors in the equation
§ s is the seasonal period (cycle frequency using R terminology)

→ In the next slides we will explain each single component of ARIMA models!



ARIMA Models: Autoregressive part (AR)

In a multiple regression model, we predict the target variable Y using a linear


combination of independent variables (predictors) → in an autoregression model,
we forecast the variable of interest using a linear combination of past values of the
variable itself

The term autoregression indicates that it is a regression of the variable against itself
§ An autoregressive model of order p, denoted AR(p), can be written as

y(t) = c + φ1·y(t−1) + φ2·y(t−2) + … + φp·y(t−p) + ε(t)

Where:
§ y(t) = dependent variable
§ y(t−1), y(t−2), …, y(t−p) = independent variables (i.e., lagged values of y(t) as predictors)
§ φ1, φ2, …, φp = regression coefficients
§ ε(t) = error term (must be white noise)



ARIMA Models: Autoregressive part (AR)

Autoregressive simulated process examples:


AR(1) process example (φ1 = 0.5)    AR(2) process example (φ1 = 0.5, φ2 = 0.2)

Consider that, in the case of an AR(1) model:

§ When φ1 = 0, y(t) is white noise
§ When φ1 = 1 and c = 0, y(t) is a random walk
§ In order to have a stationary series, the following condition must hold: −1 < φ1 < 1
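An AR(1) process like the examples above can be simulated in a few lines (an illustrative sketch; coefficient, length, and seed are arbitrary choices):

```python
import random

def simulate_ar1(phi, c=0.0, n=200, seed=1):
    """Simulate y(t) = c + phi * y(t-1) + e(t) with standard Gaussian errors."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n - 1):
        y.append(c + phi * y[-1] + rng.gauss(0.0, 1.0))
    return y

series = simulate_ar1(phi=0.5)  # stationary, since |phi| < 1
print(len(series))  # 200
```

Plotting such a series next to its ACF is a useful way to build intuition for the patterns discussed later.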



ARIMA Models: Moving Average part (MA)

Rather than use past values of the forecast variable in a regression, a Moving
Average model uses past forecast errors in a regression-like model

In general, a moving average process of order q, MA (q), is defined as:

y(t) = c + ε(t) + θ1·ε(t−1) + θ2·ε(t−2) + … + θq·ε(t−q)

The lagged values of ε(t) are not actually observed, so it is not a standard regression

Moving average models should not be confused with moving average smoothing
(the process used in classical decomposition to obtain the trend
component) → a moving average model is used for forecasting future values, while
moving average smoothing is used for estimating the trend-cycle of past values



ARIMA Models: Moving Average part (MA)

Moving Average simulated process examples:

MA(1) process example (θ1 = 0.7)    MA(2) process example (θ1 = 0.8, θ2 = 0.5)

§ Looking just at the time plot, it is hard to distinguish between an


AR process and a MA process!



ARIMA Models: ARMA and ARIMA

If we combine autoregression and a moving average model,


we obtain an ARMA(p,q) model:

y(t) = c + φ1·y(t−1) + … + φp·y(t−p) + θ1·ε(t−1) + … + θq·ε(t−q) + ε(t)
(autoregressive component of order p + moving average component of order q)

To use an ARMA model, the series must be STATIONARY!


§ If the series is NOT stationary, before estimating an ARMA model we need to apply one or more
differences in order to make the series stationary: this is the integration process, called I(d), where d =
number of differences needed to get stationarity
§ If we model the integrated series using an ARMA model, we get an ARIMA(p,d,q) model where
p = order of the autoregressive part; d = order of integration; q = order of the moving average part
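The integration step can be sketched as repeated first differencing (an illustrative helper):

```python
def difference(series, d=1):
    """Apply d rounds of first differencing (the 'I' in ARIMA)."""
    for _ in range(d):
        series = [series[t] - series[t - 1] for t in range(1, len(series))]
    return series

# A linear trend is non-stationary in the mean; one difference makes it constant
print(difference([0, 2, 4, 6, 8, 10], d=1))  # [2, 2, 2, 2, 2]
```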



ARIMA Models: ARMA and ARIMA
ARIMA simulated process examples

ARMA(2,1) process example, equal to ARIMA(2,0,1) (φ1 = 0.5, φ2 = 0.4, θ1 = 0.8)    ARIMA(2,1,1) process example (φ1 = 0.5, φ2 = 0.4, θ1 = 0.8)



ARIMA Models: Model identification

General rules for model identification based on ACF and PACF plots:

The data may follow an ARIMA(p,d,0) model if the ACF and PACF plots of the
differenced data show the following patterns:
§ the ACF is exponentially decaying or sinusoidal
§ there is a significant spike at lag p in the PACF, but none beyond lag p

The data may follow an ARIMA(0,d,q) model if the ACF and PACF plots of the
differenced data show the following patterns:
§ the PACF is exponentially decaying or sinusoidal
§ there is a significant spike at lag q in the ACF, but none beyond lag q

→ For a general ARIMA(p,d,q) model (with both p and q > 0) both ACF and PACF plots show
exponential or sinusoidal decay, and it is more difficult to understand the structure of the model



ARIMA Models: Model identification
Specifically:

TIME SERIES | ACF | PACF

AR(1) | Exponential decay: from the positive side or alternating (depending on the sign of the AR coefficient) | Peak at lag 1, then decays to zero: positive peak if the AR coefficient is positive, negative otherwise

AR(p) | Exponential decay or alternating sinusoidal decay | Peaks at lags 1 up to p

MA(1) | Peak at lag 1, then decays to zero: positive peak if the MA coefficient is positive, negative otherwise | Exponential decay: from the positive side or alternating (depending on the sign of the MA coefficient)

MA(q) | Peaks at lags 1 up to q | Exponential decay or alternating sinusoidal decay



ARIMA Models: Model identification

AR(2): φ1 > 0, φ2 > 0

AR(2): φ1 < 0, φ2 > 0



ARIMA Models: Model identification

MA(1): θ1 > 0

MA(1): θ1 < 0



ARIMAX Models: Adding explicative variables

A special case of ARIMA models allows you to generate forecasts that depend on
both the historical data of the target time series (Y) and on other exogenous
variables (Xk) → we call them ARIMAX models
§ This is not possible with other classical time series analysis techniques (e.g. ETS), where the
prediction depends only on past observations of the series itself
§ The advantage of ARIMAX models therefore consists in the possibility of including additional
explanatory variables in addition to the lags of the target dependent variable

𝑌! = 𝑐 + ∅#𝑌!"# + … + ∅, 𝑌!", + 𝜃#𝜀!"# + … + 𝜃- 𝜀!"- + 𝛽#𝑋# + 𝛽.𝑋. + … + 𝛽/ 𝑋/ + 𝜀!

AUTOREGRESSIVE MOVING AVERAGE EXPLICATIVE VARIABLES ERROR TERM


the forecast depends the forecast depends on Independent variables that White noise (i.i.d, 0
on past observations the past errors (the provide additional information, mean and constant
(weighted with the difference between the useful to improve prediction: variance)
regression observed value and you can add also LAGGED
coefficients) estimated value) effect of explicative variables!!



ARIMA Models: Seasonal ARIMA

A seasonal ARIMA model is formed by including additional seasonal terms


in the ARIMA models we have seen so far
ARIMA(p, d, q)(P, D, Q)s

where s = number of periods per season (i.e. the frequency of seasonal cycle)
We use uppercase notation for the seasonal parts of the model, and lowercase
notation for the non-seasonal parts of the model

→ As usual, d / D are the number of differences/seasonal differences necessary


to make the series stationary



ARIMA Models: Seasonal ARIMA identification

The seasonal part of an AR or MA model will be seen in the seasonal lags of the
PACF and ACF

For example, an ARIMA(0,0,0)(0,0,1)₁₂ model will show:

§ A spike at lag 12 in the ACF but no other significant spikes
§ The PACF will show exponential decay in the seasonal lags; that is, at lags 12, 24, 36, …

Similarly, an ARIMA(0,0,0)(1,0,0)₁₂ model will show:   (figure: example of an ARIMA(0,0,0)(1,0,0)₁₂ process)

§ Exponential decay in the seasonal lags of the ACF


§ A single significant spike at lag 12 in the PACF



ARIMA Models: estimation and AIC
Parameters estimation
In order to estimate an ARIMA model, Maximum Likelihood Estimation (MLE) is normally used

This technique finds the values of the parameters which maximize the probability of obtaining the
data that we have observed → for given values of (p, d, q)(P, D, Q) (i.e., the model order) the algorithm will
try to maximize the log likelihood when finding parameter estimates

ARIMA model order


A commonly used criterion to compare different ARIMA models (i.e., with different values for (p,q)(P,Q) but
fixed d, D) and to determine the optimal ARIMA order is the Akaike Information Criterion (AIC)
AIC = −2·log(Likelihood) + 2p

§ where p is the number of estimated parameters in the model


§ AIC is a goodness of fit measure
§ The best ARIMA model is the one with the lowest AIC → most automatic model selection methods
(e.g., auto.arima in R) use the AIC to determine the optimal ARIMA model order
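As a toy illustration of the trade-off (the likelihood values below are made up for the example):

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: lower is better."""
    return -2 * log_likelihood + 2 * n_params

# A richer model must raise the likelihood enough to pay for its extra parameters
print(aic(log_likelihood=-120.0, n_params=3))  # 246.0
print(aic(log_likelihood=-119.5, n_params=5))  # 249.0 -> the simpler model wins
```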



ARIMA Model selection criteria: Manual procedure (outline)

§ After preliminary analysis (and time series transformations, if needed),


follow these steps:

(1) Obtain stationary series using differencing

(2) Figure out possible order(s) for the


model looking at ACF (and PACF) plot

(3) Compare models from different point of


view (goodness of fit, accuracy, bias, …)

(4) Examine the residuals of the best model



ARIMA Model selection criteria: Manual procedure (details)

After preliminary analysis (and time series transformations, if needed),


follow these steps:
1. If the series is not stationary, use differencing (simple and/or seasonal) in order to obtain a
stationary series → together with graphical analysis, there are specific statistical tests (e.g., ADF)
useful to understand if the series is stationary
2. Examine the ACF/PACF of the stationary series and try to obtain an idea about residual
structure of correlation → is an AR(p) / MA(q) model appropriate, or do you need a more complex
model? Do you need to model the seasonality using seasonal autoregressive lags? It is frequent
that you need to consider more candidate models to test
3. Try your chosen model(s)*, and use different metrics to compare the performance:
§ Compare goodness of fit using AIC
§ Compare accuracy using measures like MAPE (in-sample and out-of-sample!)
§ Model complexity (simple is better!)
4. Finally, check the residuals from your chosen model by plotting the ACF of the residuals and doing
some tests on the residuals (e.g., the Ljung-Box test of autocorrelation) → they must be white noise
when the model is ok!
* Always consider slight variations of models selected in point 2: e.g. vary one or both p and q from current model by 1



Component: ARIMA Learner

§ Learns ARIMA model of specified orders on selected target column.

§ Input: Time series, specified orders, estimation method
§ Output: ARIMA model
§ Output: Model performance statistics
§ Output: Model residuals



Component: ARIMA Predictor

§ Generates number of forecasts set in configuration and in-sample predictions


based on range used in training
§ Checking the dynamic box will use predicted values for in-sample prediction

§ Input: ARIMA model
§ Option: predict differenced (linear) or original (level) time series if I > 0
§ Output: Forecasted values and their standard errors
§ Output: In-sample predictions



Component: Auto ARIMA Learner

§ Creates all combinations of ARIMA Models up to specified Orders.


§ Select best model based on either AIC or BIC.
! Can take a long time to execute due to brute force approach.

§ Input: Time series
§ Output: ARIMA model
§ Output: Model performance statistics and the best model
§ Output: Model residuals



Component: Analyze ARIMA Residuals

§ Inspect ACF plot, residuals plot, Ljung-Box test statistics, and normality
measures → are residuals stationary and normally distributed?

§ Input: ARIMA residuals
§ Output: ACF plot of residuals, Ljung-Box test statistics
§ Output: Residuals plot, normality measures



ARIMA Performance Comparison

§ (2,1,1) vs (1,0,0) vs (0,1,0)

ARIMA(p,d,q) R^2 AIC MAPE RMSE

ARIMA(2,1,1) 0.798 25,899 6.073 0.870

ARIMA(1,0,0) 0.808 25,405 5.466 0.871

ARIMA(0,1,0) 0.798 25,924 6.048 0.871



Exercise 3: ARIMA Models

§ Train a model with both the ARIMA


Learner and Auto ARIMA Learner.
§ Generate a Forecast for each model
using the ARIMA Predictor.
§ Score your forecasts.
§ Analyze ARIMA residuals.



KNIME Time Series Analysis
Course - Session 4
KNIME AG

Agenda

7. Machine Learning based Models
8. Hyperparameter Optimization
9. Quick Look at LSTM Networks
10. Example of Time Series Analysis on Spark





Machine Learning based Models
Lag Column + Regressions
Using Machine Learning Techniques

§ Use Lag Column Node(s) to


create features
§ Lagged Columns for input
§ Original Column for target
§ When Partitioning make sure
data is sorted and take from top



Useful Models on lagged inputs

§ Regression Trees and Forests


§ Linear and Polynomial
Regression
§ Deep Learning
§ Options with Spark, H2O,
XGBoost, Keras, and
TensorFlow



Recap: Lag Column Node

§ Append past values as new columns


§ Shift cells l (lag interval) steps up
§ Duplicate the lag column L (lag value) times.
In each column the rows are shifted l, 2*l, .., L*l steps up
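The same lagging logic can be sketched in plain Python (illustrative, not the node's implementation):

```python
def lag_columns(series, lag_interval=1, lag_value=3):
    """For each time t, collect [y(t-l), y(t-2l), ..., y(t-L*l)] as inputs, y(t) as target."""
    rows = []
    start = lag_interval * lag_value
    for t in range(start, len(series)):
        lags = [series[t - k * lag_interval] for k in range(1, lag_value + 1)]
        rows.append((lags, series[t]))
    return rows

rows = lag_columns([1, 2, 3, 4, 5, 6], lag_interval=1, lag_value=3)
print(rows[0])  # ([3, 2, 1], 4)
```

Each row pairs the lagged inputs with the original (unshifted) value as the training target.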



Hyperparameter Optimization
Performance on Tree Depth

§ Tree depth 1 vs. 10 vs. 100 performance
§ Can we automate the selection?

Max Tree Depth   R^2     MAPE    RMSE
1                0.495   8.65    2.45
10               0.954   2.918   0.736
100              0.957   3.16    0.713



Hyperparameter Optimization

§ Some modeling approaches are very sensitive to their configuration.


§ Calculating optimum settings is not always possible.
§ Hyperparameter Optimization loops may help find a good configuration



New Node: Parameter Optimization Loop Start

§ Define some parameters to optimize


§ Set upper/lower bounds and step sizes
(and flag integers)
§ Choose an optimization method
§ Brute force for maximum accuracy but slower
computation
§ Hill climbing for faster runtimes, but it may get
stuck in local optimum settings
§ Random search to randomly sample
parameter values within a given range
§ Bayesian Optimization (TPE)
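The brute-force option can be sketched as follows (a toy objective stands in for "train a model and return its accuracy"):

```python
import itertools

def grid_search(objective, param_grid):
    """Brute-force optimization: score every parameter combination, keep the best."""
    best_params, best_score = None, float("-inf")
    keys = list(param_grid)
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective peaking at n_trees=50, depth=10
objective = lambda p: -abs(p["n_trees"] - 50) - abs(p["depth"] - 10)
print(grid_search(objective, {"n_trees": [5, 50, 100], "depth": [1, 10, 20]}))
# ({'n_trees': 50, 'depth': 10}, 0)
```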



New Node: Parameter Optimization Loop End

§ Collects a value to optimize as flow variable.


§ Value may be maximized (accuracy)
or minimized (error)



Time Series Analysis with LSTM
Units in Deep Learning Networks
What is Deep Learning?

§ Deep Learning extends the family of Artificial Neural Networks with:


§ Deeper architectures
§ A few additional paradigms, e.g. Recurrent Neural Networks (RNNs)

§ The algorithms to train such networks are not new, but they have been enabled
by recent advances in hardware performance and parallel execution.



Feed-Forward vs. Recurrent Neural Networks
(diagram: a feed-forward network mapping inputs x1–x11 to outputs y1–y3)


Unrolling RNNs through time

Image Source: Christopher Olah, “Understanding LSTM Networks”


https://colah.github.io/posts/2015-08-Understanding-LSTMs/

§ Capture the dynamics of a sequence through a loop connection


§ RNNs are not constrained to a fixed sequence size
§ Trained via BPTT (Back-Propagation Through Time)



LSTM = Long Short Term Memory Unit

§ Special type of units with three gates


§ Input gate
§ Forget gate
§ Output gate


Image Source: Christopher Olah, “Understanding LSTM Networks”


https://colah.github.io/posts/2015-08-Understanding-LSTMs/



The KNIME Keras Integration for Deep Learning

§ The KNIME Deep Learning integration that we will use is based on Keras
§ We need to install:
§ KNIME Deep Learning Keras Integration
§ Keras
§ Python
§ The Keras integration includes:
§ Activation Functions
§ Neural Layers (many!!!)
§ Learners / Executors
§ Layer Freezer
§ Network Reader/Writer
§ Network Converter to TensorFlow



LSTM based Architecture

(diagram: k LSTM units read the lagged inputs x(t−n), …, x(t−2), x(t−1); a ReLU output layer predicts x(t))



LSTM based Networks: Deep Learning nodes

In: Supplementary Workflows/02_LSTM_Networks



Results from LSTM based Network

§ Out-sample Static testing Past Out-sample Dynamic testing


Past RMSE MAE MAPE R^2

10h 0.646 0.424 0.064 0.985 10 hours


100h 0.474 0.312 0.047 0.992

200h 0.522 0.347 0.051 0.992

100 hours
§ Out-sample Dynamic testing
Past RMSE MAE MAPE R^2

10h 7.911 6.521 0.988 -2.115


200 hours
100h 4.107 2.637 0.286 0.167

200h 2.624 1.742 0.225 0.662



Deployment
Saving and Reading Models

§ Model Writer § Model Reader


§ After training your model attach the output § Use the Model Reader node to load a
of your learner node to the Model Writer to saved model and attach this to your
save your trained model. Predictor for use in deployment.



Recursive Loop Nodes

§ The Recursive Loop Start and End nodes pass data back to the start of the loop
with every iteration.
§ This enables us to generate predictions based on predictions.
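The recursive idea can be sketched in plain Python (illustrative; a toy "model" stands in for a trained predictor):

```python
def dynamic_forecast(history, predict, n_lags, horizon):
    """Recursive forecasting: each new prediction is fed back in as an input."""
    window = list(history[-n_lags:])
    forecasts = []
    for _ in range(horizon):
        y_hat = predict(window)
        forecasts.append(y_hat)
        window = window[1:] + [y_hat]  # drop the oldest value, append the prediction
    return forecasts

# Toy "model": the mean of the lag window
mean_model = lambda w: sum(w) / len(w)
print(dynamic_forecast([1, 2, 3], mean_model, n_lags=3, horizon=2))  # first forecast: 2.0
```

Forecast errors compound under this scheme, which is why dynamic out-of-sample metrics are usually worse than static ones.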



Model Deployment Workflow

§ Generate dynamic predictions for a


selected forecast horizon
§ How well does the Forecast hold
up on dynamic predictions?

In: Supplementary Workflows/03_Deployment_and_Signal_Reconstruction



Time Series Analysis on Spark
What is Spark? And why should we use it?

§ Spark is a general-purpose distributed data processing engine.


§ Application developers incorporate Spark into their applications to rapidly query,
analyze, and transform data at scale.
§ Tasks most frequently associated with Spark include ETL and SQL batch jobs
across large data sets, processing of streaming data from sensors, and machine
learning tasks.

(Apache Spark stack: Spark SQL, Spark Streaming, MLlib, GraphX)



Taxi Demand Prediction on Spark – KNIME Blog

https://www.knime.com/blog/time-series-analysis-a-simple-example-with-knime-and-spark



Taxi Demand Prediction: Connect to Spark

Creates a local Spark cluster.


To connect to an external Spark cluster
use „Create Spark Context“



Taxi Demand Prediction: Training



Taxi Demand Prediction: Deployment



IoT References on the KNIME Hub

https://kni.me/w/b-rFpW9Oueg0GhuN https://kni.me/w/vEaDHqWycVG-42ti



Exercise 4: Machine Learning

§ Predict the residual of the energy


consumption with a Random Forest
model and Linear Regression model
§ Use ten past values for prediction
§ Evaluate the prediction results



Exercise 5: Hyper Parameter Optimization

§ Find the best number of trees and tree depth


that give the highest accuracy of the Random
Forest model. Test the following values:
§ Number of trees: min=5, max=100
§ Tree depth: min = 1, max = 20

§ Optional: Train a Random Forest model using


the best performing parameters



Real Time Streaming
What is Real-Time Streaming?

§ In real-time data streaming, big volumes of data are received and processed
quickly as soon as they are available.
§ “Quickly” and “as soon as available” are two important factors, since they allow
a reaction to changing conditions in real time.

Receiver



Simple Streaming Execution – In batches

§ When the first node has processed the first


batch, it passes it to the next node which can
then already begin with its processing.

Only for Components

Only for most Nodes



Streaming Solution: via Scheduler in KNIME Server

§ Smallest time resolution: by the Minute


§ Sometimes this is enough.



Streaming Solution: on demand via REST service

§ The Job Pool keeps max. N


active REST jobs at the same
time. This speeds up the
execution of up to N
concurrent REST calls.

Job pool size N in “Properties”

REST Response
REST Request



What is Kafka?

§ Apache Kafka is a community distributed streaming platform capable of


handling trillions of events a day.
§ Initially conceived as a messaging queue, Kafka is based on an abstraction of a
distributed commit log.
§ Since being created and open sourced by LinkedIn in 2011, Kafka has quickly
evolved from messaging queue to a full-fledged event streaming platform.
§ KNIME Kafka Integration:



Streaming Solution: with Kafka

In: Kafka_write_and_read workflow on the KNIME Hub



References

§ Hyndman, Rob J., and George Athanasopoulos. Forecasting: principles and


practice. OTexts, 2018.
§ Gilliland, Michael, Len Tashman, and Udo Sglavo. Business forecasting:
Practical problems and solutions. John Wiley & Sons, 2016.
§ Franses, Philip Hans, and Philip Hans BF Franses. Time series models for
business and economic forecasting. Cambridge university press, 1998.
§ Chatfield, Chris, and Haipeng Xing. The analysis of time series: an introduction
with R. CRC press, 2019.



Thank You!
[email protected]
