Introduction To Time Series Analysis

This document provides an introduction to time series analysis, covering key concepts such as linear systems, nonlinear dynamics, and rule induction. It explains the importance of time series in data science, including its applications in predicting future trends and detecting anomalies. The document also details various methods for analyzing time series data, including moving averages, autoregressive models, and ARIMA, along with real-life examples for better understanding.

INTRODUCTION TO TIME SERIES ANALYSIS: LINEAR SYSTEMS & NONLINEAR DYNAMICS, RULE INDUCTION

Raajeev H Dave
SESSION OVERVIEW

• Part 1: Introduction & Basics
• Part 2: Linear Systems Analysis
• Part 3: Nonlinear Dynamics in Time Series
• Part 4: Rule Induction in Time Series
• Part 5: Q&A & Discussion
WHAT IS A TIME SERIES? (IN SIMPLE WORDS)
• Imagine you are keeping track of the
outside temperature every hour. You
write down the temperature at 8
AM, 9 AM, 10 AM, and so on. If
you look at this list later, you will see
how the temperature is changing
over time.
• A Time Series is just a collection of
numbers recorded at different
times in a specific order.
• In simple terms, time series =
data collected over time.
REAL-LIFE EXAMPLES OF TIME SERIES
DATA
• We see time series data everywhere in real life. Here are some simple
examples:
• 1️⃣Stock Prices 🏦
• The price of a company’s stock goes up and down every second.
• Investors analyze past stock prices to predict future prices.
• 2️⃣Weather Data 🌦
• The temperature, humidity, and rainfall are measured every day.
• Weather scientists use past data to forecast tomorrow’s weather.
REAL-LIFE EXAMPLES OF TIME SERIES
DATA
• 3️⃣Heart Rate Monitoring 💓
• Smartwatches record heartbeats every second.
• Doctors check this data to detect heart problems.
• 4️⃣Electricity Usage 💡
• Power companies measure electricity use every hour.
• They use this data to prevent power failures.
• 5️⃣Traffic Data 🚦
• The number of cars on the road is counted every minute.
• This helps in predicting traffic jams.
WHY IS TIME SERIES IMPORTANT IN
DATA SCIENCE & MACHINE LEARNING?
• Time series is important because many things change over time, and we
want to understand the past and predict the future.
• 1️⃣Predicting the Future 🔮
• If we analyze past sales, we can predict next month's sales.
• If we study past weather, we can forecast next week's temperature.
• 2️⃣Finding Patterns 📊
• Time series helps in detecting seasonal trends (like more ice cream sales in
summer).
• Businesses use this to plan ahead.
WHY IS TIME SERIES IMPORTANT IN
DATA SCIENCE & MACHINE LEARNING?
• 3️⃣Anomaly Detection 🚨
• Banks check time series of transactions to catch fraud.
• Factories use time series to detect machine failures before they happen.
• 4️⃣Optimizing Resources 💰
• Electricity companies use time series to reduce power wastage.
• Hospitals use patient data to allocate doctors and beds efficiently.
FINAL SUMMARY

• Time series is all about studying how things change over time, so we can
understand patterns, detect problems, and make better predictions for
the future.
• A simple Python example of how a time series looks is sketched below.
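As a taste of what this looks like in code, here is a minimal Python sketch (pandas and matplotlib are assumed to be installed; the timestamps and temperature values are invented for illustration):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hourly temperature readings, recorded in time order (invented values)
times = pd.date_range("2024-01-01 08:00", periods=6, freq="h")
temps = [14.2, 15.0, 16.1, 17.3, 18.0, 18.4]

series = pd.Series(temps, index=times, name="temperature_c")
print(series)

# A time series is just values indexed by time, so plotting it shows
# how the measurement changes over time
series.plot(marker="o", title="Temperature over time")
plt.ylabel("Temperature (°C)")
plt.show()
```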
TYPES OF TIME SERIES DATA (IN SIMPLE
WORDS)
• Time series data can be divided into two major types based on:
1️⃣How the data behaves over time (Stationary vs. Non-Stationary)
2️⃣How many things we are measuring (Univariate vs. Multivariate)
• Let’s break it down in a simple way with real-life examples.
STATIONARY VS. NON-STATIONARY TIME
SERIES
• A time series can either stay stable over time or keep changing.
• ✅ Stationary Time Series (Stable & Predictable)
• A stationary time series means that its pattern does not change over time.
• The average (mean) and variation (spread of data) remain constant.

• 🔹 Example:
📌 Heart Rate at Rest
• If you check your heart rate while sitting still, it stays around 70 beats per minute.
• It might slightly go up or down but generally stays within a fixed range.
• This is a stationary time series because it has a stable pattern.

• Why is it useful?
• Stationary data is easier to analyze and make predictions.
NON-STATIONARY TIME SERIES
(CHANGES OVER TIME)
• A non-stationary time series keeps changing its pattern over time.
• The average or variation is not constant.
• 🔹 Example:
📌 Stock Prices
• A company’s stock price keeps increasing or decreasing over months.
• There is no fixed average price—it keeps changing over time.
• This is a non-stationary time series because it has a changing trend.

• Why is it challenging?
• Harder to predict because the pattern keeps changing.
• We often convert non-stationary data into stationary (by removing trends).
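One common way to do this conversion is differencing (working with changes instead of levels). A minimal sketch, assuming statsmodels is available; the augmented Dickey-Fuller test and the random-walk data are illustrative choices, not something from the slides:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# An invented non-stationary series: a random walk with upward drift
rng = np.random.default_rng(0)
prices = pd.Series(100 + np.cumsum(rng.normal(0.5, 1.0, 200)))

# Differencing removes the trend: analyze day-to-day changes instead
changes = prices.diff().dropna()

# Augmented Dickey-Fuller test: a small p-value suggests stationarity
for name, s in [("raw prices", prices), ("differenced", changes)]:
    p_value = adfuller(s)[1]
    print(f"{name}: ADF p-value = {p_value:.3f}")
```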
UNIVARIATE VS. MULTIVARIATE TIME
SERIES
• This depends on how many things we are measuring over time.
• ✅ Univariate Time Series (One Variable Over Time)
• A single type of data recorded over time.
• 🔹 Example:
📌 Daily Temperature
• If we measure only the temperature every day, this is univariate.
• We are tracking just one thing (temperature) over time.
• Why is it useful?
• Simple to analyze and forecast.
MULTIVARIATE TIME SERIES (MULTIPLE
VARIABLES OVER TIME)
• More than one type of data recorded at the same time.
• 🔹 Example:
📌 Weather Report (Temperature + Humidity + Wind Speed)
• Every day, we measure:
✅ Temperature (e.g., 30°C)
✅ Humidity (e.g., 60%)
✅ Wind Speed (e.g., 10 km/h)
• This is multivariate because we are tracking multiple things at once.
• Why is it useful?
• It gives a fuller picture (like how humidity + wind speed affect temperature).
• Helps in better predictions.
FINAL SUMMARY (IN ONE LINE)

• ✔ Stationary Time Series = Stable & predictable (like resting heart rate).
❌ Non-Stationary Time Series = Always changing (like stock prices).
✔ Univariate Time Series = One thing measured over time (like daily
temperature).
✔ Multivariate Time Series = Multiple things measured together (like
weather conditions).
LINEAR SYSTEMS ANALYSIS IN TIME
SERIES (IN SIMPLE WORDS)
• When analyzing time series, we need methods to understand trends and
make predictions. Some of the most popular methods are based on linear
systems (meaning they assume data follows a straight-line relationship).
CLASSICAL LINEAR METHODS FOR TIME
SERIES ANALYSIS
• These methods assume that past values can help predict future values
using simple mathematical rules.
MOVING AVERAGES (SMA & EMA) –
SMOOTHING THE DATA
• Moving averages help us remove noise (random ups and downs) from data and find the main trend.
• ✅ Simple Moving Average (SMA) – The Basic Method
• What it does:
• Takes the average of the last few values to smooth the data.
• Helps to see the trend clearly.
• 🔹 Example:
Imagine tracking your weekly sales:
• Week 1: 50 sales
• Week 2: 60 sales
• Week 3: 70 sales
• Week 4: 80 sales
• If we take a 3-week moving average, we calculate:
• (50+60+70)/3=60
• (60+70+80)/3=70
• This removes sudden jumps and shows a smooth trend.
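The same 3-week moving average can be computed with pandas; a quick sketch using the sales numbers from this slide:

```python
import pandas as pd

sales = pd.Series([50, 60, 70, 80],
                  index=["Week 1", "Week 2", "Week 3", "Week 4"])

# 3-week SMA: average of the current week and the two before it
sma = sales.rolling(window=3).mean()
print(sma)  # Week 3 -> 60.0, Week 4 -> 70.0 (earlier weeks are NaN)
```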
EXPONENTIAL MOVING AVERAGE (EMA) –
GIVING MORE IMPORTANCE TO RECENT DATA
• Unlike SMA, which treats all data points equally, EMA gives more weight to
recent values.
• This makes it more responsive to changes.
• 🔹 Example:
If a company’s stock suddenly starts increasing, EMA will catch it faster
than SMA.
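A small sketch contrasting the two (the ewm method in pandas is one standard way to compute an EMA; the price series and the span of 3 are invented so the sudden jump is easy to see):

```python
import pandas as pd

prices = pd.Series([100, 100, 100, 120, 120, 120])  # invented sudden jump

sma = prices.rolling(window=3).mean()
ema = prices.ewm(span=3, adjust=False).mean()

# After the jump, the EMA moves toward 120 faster than the SMA,
# because recent prices carry more weight
print(pd.DataFrame({"price": prices, "SMA(3)": sma, "EMA(span=3)": ema}))
```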
AUTOREGRESSIVE (AR) AND MOVING
AVERAGE (MA) MODELS
• ✅ Autoregressive Model (AR) – Using Past Values to Predict Future
• The AR model predicts the next value based on previous values.
• It assumes that today’s value depends on yesterday’s values.
• 🔹 Example:
Imagine predicting today’s temperature based on the last 3 days:
• Today's Temperature = 0.6 × (Yesterday's) + 0.3 × (Day before Yesterday's) + 0.1 × (Three Days Ago's)
• If the past three days' temperatures were 30°C (yesterday), 28°C (day before), and 32°C (three days ago), then:
• Predicted Temp = (0.6 × 30) + (0.3 × 28) + (0.1 × 32) = 29.6°C

• This is an AR(3) model because it uses 3 past values.


THE VALUES 0.6, 0.3, AND 0.1 IN THE
FORMULA REPRESENT WEIGHTS
• What do these weights mean?
1. 0.6 (Yesterday's Temperature) → This has the highest weight, meaning yesterday's temperature has the most influence on today's prediction.
2. 0.3 (Day Before Yesterday's Temperature) → This has a lower weight, meaning the temperature two days ago still affects today's prediction, but less than yesterday's.
3. 0.1 (Three Days Ago's Temperature) → This has the lowest weight, meaning the temperature from three days ago has the least impact on today's prediction.
WHY DIFFERENT WEIGHTS?

• Recent days are more relevant than older days.


• The sum of weights (0.6 + 0.3 + 0.1 = 1.0) ensures a proper balance in
prediction.
EXAMPLE CALCULATION

• If past temperatures were:


• Yesterday = 30°C
• Day before Yesterday = 28°C
• Three Days Ago = 32°C
• Formula:
• Predicted Temperature = (0.6 × 30) + (0.3 × 28) + (0.1 × 32)
• Calculation:
• = 18 + 8.4 + 3.2 = 29.6°C
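The same calculation as a tiny numpy sketch (the weights 0.6/0.3/0.1 are the illustrative ones from these slides, not values estimated from data):

```python
import numpy as np

weights = np.array([0.6, 0.3, 0.1])        # yesterday, 2 days ago, 3 days ago
past_temps = np.array([30.0, 28.0, 32.0])  # most recent first

# AR(3) prediction: weighted sum of the last three observations
predicted = weights @ past_temps
print(f"Predicted temperature: {predicted:.1f} °C")  # 29.6 °C
```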
WHAT IS AR(3)?

• AR(3) means AutoRegressive model of order 3, meaning it uses the last 3


observations to predict the next one.
• The model assumes that today's temperature is a linear combination of past
temperatures with decreasing importance as time goes backward.
REAL-WORLD USE CASE

• 📌 Weather Forecasting → AR models like this help meteorologists predict


temperatures based on recent patterns.
MOVING AVERAGE MODEL (MA) – USING
PAST ERRORS TO PREDICT
• Instead of using past values directly, the MA model uses past mistakes
(errors) to improve predictions.
• 🔹 Example:
Let’s say we predicted 30°C for today, but the actual temperature was 32°C
(we made a mistake of +2°C).
• The MA model learns from these mistakes to improve the next prediction.
ARMA AND ARIMA MODELS – COMBINING
METHODS FOR BETTER PREDICTIONS
• ARMA (Autoregressive Moving Average) – Best of Both Worlds
• AR (Autoregressive) + MA (Moving Average) = ARMA
• Uses both past values and past errors to make better predictions.
• 🔹 Example:
If you want to predict next month’s sales:
• AR part → Uses past sales numbers.
• MA part → Adjusts based on past mistakes in prediction
SIMPLE REAL-WORLD EXAMPLE

• Let’s say you are tracking daily sales at a small shop.


• Sales depend on previous days (AR component).
• Sales also have random fluctuations (MA component), like unexpected
weather changes or discounts.
FORMULA OF ARMA(1,1) (SIMPLE CASE)

• Y_t = 0.7 × Y_{t-1} + 0.3 × e_{t-1} + e_t
• Here Y_{t-1} is yesterday's value, e_{t-1} is yesterday's prediction error, and e_t is today's random error (the coefficients are the illustrative ones used in the next slide).
EXAMPLE CALCULATION

• Let’s assume:
• Yesterday’s Sales (Y_{t-1}) = 100 units
• Yesterday’s Error (e_{t-1}) = 5 units
• Today’s Random Error (e_t) = 2 units
• Now, using the formula:
• Y_t = (0.7 × 100) + (0.3 × 5) + 2
• = 70 + 1.5 + 2 = 73.5 units
• Predicted Sales for Today = 73.5 units
BREAKDOWN OF THE FORMULA

1. 0.7 (AR Coefficient – AutoRegression Part)
• This means 70% of today's sales depend on yesterday's sales.
• A higher value (e.g., 0.9) would mean sales are highly dependent on the past.
• A lower value (e.g., 0.3) would mean sales are less dependent on past sales.

2. 0.3 (MA Coefficient – Moving Average Part)
• This means 30% of yesterday's error (prediction mistake) is used to adjust today's prediction.
• If the model overestimated sales yesterday, it will adjust downwards today.
• If the model underestimated sales yesterday, it will adjust upwards today.

3. e_t (Random Error Today)
• This represents today's unexpected factors (e.g., a sudden weather change, a special event, or an economic shift).
INTUITION BEHIND THESE WEIGHTS

• ✔ 0.7 → Past values are more important (long-term trends matter).
✔ 0.3 → Past mistakes are also important (we learn from errors).
✔ The AR weight stays below 1, which keeps the prediction stable while still leaving room for randomness.
• A sketch of how these weights can be estimated with Python follows below.
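In practice these weights are estimated from data rather than picked by hand. A sketch using statsmodels (an ARMA(1,1) is fitted as an ARIMA with order (1, 0, 1); the series is simulated so the true weights are known in advance):

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.arima_process import ArmaProcess

# Simulate 500 points from a known ARMA(1,1): phi = 0.7, theta = 0.3
ar = np.array([1, -0.7])  # AR polynomial 1 - 0.7L
ma = np.array([1, 0.3])   # MA polynomial 1 + 0.3L
y = ArmaProcess(ar, ma).generate_sample(nsample=500)

# Fit an ARMA(1,1) and inspect the estimated weights
result = ARIMA(y, order=(1, 0, 1)).fit()
print(result.params)  # AR estimate near 0.7, MA estimate near 0.3
```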
ARIMA (AUTOREGRESSIVE INTEGRATED
MOVING AVERAGE)
• ARMA only works if the data is stationary (no trend).
• ARIMA first removes the trend and then applies ARMA.
• 🔹 Example:
If a company’s sales keep growing every year, ARIMA first removes this
growth to find hidden patterns.
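A sketch of the trend-removal step (the "I" in ARIMA is exactly this differencing; the growing sales series is invented):

```python
import numpy as np
import pandas as pd

# Invented monthly sales with steady growth (a trend) plus noise
months = pd.period_range("2023-01", periods=24, freq="M")
rng = np.random.default_rng(1)
sales = pd.Series(200 + 10 * np.arange(24) + rng.normal(0, 5, 24),
                  index=months)

# First difference: month-over-month change, which removes the linear trend
detrended = sales.diff().dropna()
print(detrended.head())  # values hover around +10 instead of growing
# ARIMA(p, 1, q) applies this differencing internally, then fits ARMA
```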
FINAL SUMMARY (IN ONE LINE)

• ✔ SMA & EMA → Smooth the data and show trends.


✔ AR Model → Predicts using past values.
✔ MA Model → Predicts using past mistakes.
✔ ARMA Model → Combines AR + MA for better accuracy.
✔ ARIMA Model → Removes trends before applying ARMA.

SMA, EMA, AR, MA, ARMA, and ARIMA - Jupyter Notebook
UNDERSTANDING TIME SERIES ANALYSIS
METHODS WITH REAL-LIFE EXAMPLES
• Time series data often contains patterns that help us make better
predictions. To analyze and forecast time series data, we use different
mathematical models. Below, I’ll explain each method in simple words with
real-life examples.
SMA & EMA – SMOOTHING THE DATA
AND SHOWING TRENDS
• What Do They Do?
• Smooth out fluctuations in data.
• Identify trends without being distracted by random ups and downs.
EXAMPLE 1: STOCK MARKET

• Imagine you are tracking the stock price of a company. Every day, the price
goes up and down, but you want to see the overall trend.
• 📌 SMA (Simple Moving Average)
• Takes the average of the last N days (e.g., 10 days).
• If the stock prices for 5 days are 100, 102, 98, 101, and 105, then the 3-day SMA
is: (100+102+98)/3=100
• The next day, it shifts forward and calculates a new average.
EXAMPLE 1: STOCK MARKET

• EMA (Exponential Moving Average)


• Similar to SMA, but gives more weight to recent prices.
• If today’s stock price changes suddenly, EMA will react faster than SMA.
EXAMPLE 1: STOCK MARKET

• How They Help:


• Investors use SMA & EMA to decide when to buy or sell stocks.
• If the SMA is going up, the stock is in an uptrend → Buy.
• If the SMA is going down, the stock is in a downtrend → Sell.
EXAMPLE 2: WEATHER FORECASTING

• The temperature of a city fluctuates daily, but SMA and EMA can help
predict trends (e.g., seasonal changes).
• SMA: A 7-day SMA smooths daily fluctuations and shows the weekly trend.
• EMA: If today’s temperature suddenly rises due to a heatwave, EMA reacts
faster than SMA.
AR MODEL – USING PAST VALUES TO
PREDICT FUTURE VALUES
• What Does It Do?
• Uses past data points to predict the next value.
• Assumes that what happened in the past influences the future.
EXAMPLE 1: HOUSE PRICE PREDICTION

• Imagine you want to predict the price of a house next month.


• If house prices have been increasing steadily for 6 months, it’s likely they will
continue increasing.
• The AR model uses past prices to forecast future prices.
• Example:
• Price Today = 0.7 × Price Yesterday + 0.3 × Price 2 Days Ago
• If the past two days' prices were ₹52 lakh (yesterday) and ₹50 lakh (two days ago), then:
• Predicted Price = (0.7 × 52) + (0.3 × 50) = ₹51.4 lakh
EXAMPLE 2: SALES FORECASTING

• A retail store wants to predict next week’s sales.


• If weekend sales are always high, the AR model learns this pattern.
• It predicts that sales will be higher again next weekend.
MA MODEL – USING PAST MISTAKES TO
IMPROVE PREDICTIONS
• What Does It Do?
• Instead of past values, it learns from past prediction errors.
• Adjusts future predictions based on previous mistakes.
EXAMPLE 1: WEATHER FORECASTING



• Suppose last week’s weather forecast predicted 30°C, but the actual
temperature was 32°C (error = +2°C).
• The MA model learns this error and corrects the next forecast.
• If it predicts 28°C next time, it will adjust by +2°C, making it 30°C instead.
EXAMPLE 2: DEMAND PREDICTION IN A
SUPERMARKET
• A supermarket predicts demand for milk next week.
• If last week, they underestimated demand by 100 liters, the MA model
adds 100 liters to the next week’s prediction.
ARMA MODEL – COMBINING AR & MA
FOR BETTER ACCURACY
• What Does It Do?
• AR (Autoregressive) → Uses past values.
• MA (Moving Average) → Uses past mistakes.
• ARMA combines both for a more accurate prediction.
EXAMPLE 1: PREDICTING ELECTRICITY
CONSUMPTION
• Your electricity usage depends on past usage (AR) and unexpected
changes like a heatwave (MA).
• ARMA considers both factors for better predictions.
• Example:
• Power Usage Today = 0.5 × Usage Yesterday + 0.2 × Prediction Error Last Week
EXAMPLE 2: PREDICTING HOTEL
BOOKINGS
• A hotel predicts how many rooms will be booked next month.
• AR Model: Uses past booking trends.
• MA Model: Adjusts for errors in previous predictions (e.g., holidays).
• ARMA Model: Combines both for a better forecast.
ARIMA MODEL – REMOVING TRENDS
BEFORE APPLYING ARMA
• What Does It Do?
• ARMA works only for stationary data (no overall trend).
• If there’s a trend (e.g., stock prices always increasing), ARIMA removes it
first.
EXAMPLE 1: PREDICTING MONTHLY
SALES GROWTH
• A business sees a steady increase in sales every month.
• ARIMA first removes this trend (by calculating the difference between months).
• Then, it applies ARMA to find patterns in the remaining data.
EXAMPLE 2: PREDICTING POPULATION
GROWTH IN A CITY
• The population of a city keeps increasing every year.
• If you apply ARMA directly, it won’t work well because of the upward trend.
• ARIMA first removes the trend, then finds hidden patterns in the remaining
data.
FINAL SUMMARY IN SIMPLE WORDS

Model | What It Does | Real-Life Example
SMA & EMA | Smooths the data to show trends | Stock prices, weather forecasting
AR Model | Uses past values to predict the future | House prices, sales forecasting
MA Model | Adjusts predictions based on past mistakes | Weather correction, supermarket demand
ARMA Model | Combines AR + MA for better accuracy | Electricity usage, hotel bookings
ARIMA Model | Removes trends before applying ARMA | Monthly sales, population growth
UNDERSTANDING FOURIER TRANSFORM
FOR TIME SERIES ANALYSIS
• When analyzing time series data, we often look at patterns over time.
However, some patterns are hidden and difficult to spot just by looking at
the raw data.
• Fourier Transform helps by breaking down a time series into different
frequency components, making it easier to see repeating cycles and
trends.
• Let’s explain this in simple words with real-life examples.
WHAT IS THE FOURIER TRANSFORM?

• Simple Explanation:
• It converts a time-based signal (something that changes over time) into a
frequency-based signal (how often something repeats).
• It helps us identify hidden cycles in data.
• Think of it like turning a song into individual musical notes. Instead of
hearing a song as a mix of sounds, you break it down into different beats and
tones.
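A minimal numpy sketch (the signal is synthetic: a 365-step "yearly" cycle plus a 7-step "weekly" cycle plus noise, so we know in advance which frequencies the FFT should reveal):

```python
import numpy as np

n = 2555  # 7 * 365 "days", so both cycles fit a whole number of times
t = np.arange(n)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * t / 365)          # yearly cycle
          + 0.5 * np.sin(2 * np.pi * t / 7)    # weekly cycle
          + rng.normal(0, 0.2, n))             # noise

# Real FFT: amplitude of each frequency component (cycles per day)
amplitudes = 2 * np.abs(np.fft.rfft(signal)) / n
freqs = np.fft.rfftfreq(n, d=1.0)

# The two strongest non-zero frequencies recover the hidden cycles
top = np.argsort(amplitudes[1:])[-2:] + 1
for k in sorted(top):
    print(f"cycle length ≈ {1 / freqs[k]:.1f} days, "
          f"amplitude ≈ {amplitudes[k]:.2f}")
```

Running this prints cycle lengths of roughly 365 and 7 days, the two patterns that were hidden in the noisy signal.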
REAL-LIFE EXAMPLE 1: WEATHER
PATTERNS
• Imagine you are tracking the temperature of a city over a year.
• Looking at daily temperature data, you see ups and downs, but it’s hard to
see any clear pattern.
• The Fourier Transform helps reveal seasonal cycles:
• A 1-year cycle (summer, winter, etc.).
• A weekly cycle (weekend temperature shifts).
• A daily cycle (temperature is lower at night, higher in the day).
• 📌 How It Helps:
• We can predict temperature for future days or weeks based on these
patterns.
REAL-LIFE EXAMPLE 2: STOCK MARKET
ANALYSIS
• Stock prices move up and down daily, making it hard to see trends.
• Fourier Transform can identify hidden cycles like:
• Weekly trends (Monday dips, Friday spikes).
• Quarterly earnings cycles.
• Long-term market trends.
• 📌 How It Helps:
• Traders use Fourier analysis to identify patterns and make better investment
decisions.
REAL-LIFE EXAMPLE 3: IOT SENSOR
DATA (SMART HOME DEVICES)
• Your smart home temperature sensor records data every minute.
• You want to understand how often the air conditioning turns on/off.
• Fourier Transform can find patterns in temperature changes and predict
when the AC should turn on.
• 📌 How It Helps:
• Helps optimize energy consumption by detecting unnecessary power usage.
FREQUENCY DOMAIN ANALYSIS: WHY
DO WE NEED IT?
• Simple Explanation:
• Normally, we analyze time series in the time domain (looking at values over time).
• Frequency domain analysis helps us see cycles and repeating patterns more
clearly.
• Think of a car engine:
• Instead of looking at the engine’s speed over time, we analyze its vibration
frequency to find if something is wrong.
REAL-LIFE EXAMPLE 5: MUSIC & SOUND
PROCESSING
• A song is a mixture of different frequencies (bass, vocals, instruments).
• Fourier Transform breaks down music into different sound frequencies.
• This is how equalizers adjust bass and treble in music apps.
• 📌 How It Helps:
• Improves sound quality and allows noise removal in audio recordings.
FINAL SUMMARY IN SIMPLE WORDS

Concept | What It Does | Real-Life Example
Fourier Transform | Converts time-based data into frequency-based data | Weather patterns, stock market cycles, IoT sensor data
Frequency Domain Analysis | Helps identify hidden repeating cycles | Machine vibration analysis, music equalization
WHY DO LINEAR MODELS FAIL IN TIME
SERIES ANALYSIS?
• Linear models (like AR, MA, ARMA, ARIMA) are great for simple patterns, but
they often fail when time series data becomes too complex.
• There are three main reasons why linear models fail:
1️⃣Chaos (Unpredictable changes)
2️⃣Non-Stationarity (Changing patterns over time)
3️⃣Non-Linearity (Relationships that are not straight-line)
• Let’s explain these in simple words with real-life examples.
CHAOS – WHEN SMALL CHANGES LEAD
TO BIG EFFECTS
• 📌 Simple Explanation:
• Some systems are so sensitive that a tiny change today can cause a huge difference in the future.
• This is called the "Butterfly Effect" (a butterfly flapping its wings in one place could cause a storm
somewhere else).
• 🟢 Real-Life Example: Stock Market 📈
• Imagine you have a simple stock price prediction model based on past prices.
• But unexpected news (e.g., a new government policy, a big company's bankruptcy) can suddenly
change everything.
• A linear model assumes the past determines the future, but in chaotic systems like the stock
market, tiny events can cause big swings.
• 📌 Why Linear Models Fail?
• Linear models assume a predictable pattern, but chaotic systems don’t follow fixed rules.
NON-STATIONARITY – WHEN DATA
PATTERNS KEEP CHANGING
• 📌 Simple Explanation:
• If a time series has a changing trend, it is called non-stationary.
• Linear models assume data has a constant average and variance, but real-world data often
changes over time.
• 🟢 Real-Life Example: Weather Forecasting
• Imagine you are trying to predict rainfall based on past data.
• If you live in a place where climate change is affecting rainfall, past data won’t match future
trends.
• A linear model assumes the weather follows past trends, but in reality, climate patterns
shift over decades.
• 📌 Why Linear Models Fail?
• Linear models assume the data is stable over time, but in real life, patterns evolve.
NON-LINEARITY – WHEN RELATIONSHIPS
ARE NOT STRAIGHT-LINE
• 📌 Simple Explanation:
• Linear models assume that one thing affects another in a straight-line way.
• But in many cases, small changes in one factor can lead to unexpected, large effects.
• 🟢 Real-Life Example: Traffic Prediction 🚗
• Suppose you are predicting how long it takes to get home from work based on past traffic data.
• A linear model might assume:
• "More cars on the road → More traffic → Longer travel time."

• But in real life:


• A small accident can suddenly cause massive traffic jams.
• A small roadblock might make traffic spread unpredictably to other roads.
• The relationship isn’t linear—small factors can have huge effects.
• 📌 Why Linear Models Fail?
• Linear models assume everything follows a smooth pattern, but in real life, small changes can create non-
linear effects.
FINAL SUMMARY IN SIMPLE WORDS

Problem | What Happens? | Real-Life Example | Why Linear Models Fail?
Chaos | Tiny changes lead to huge effects | Stock market crashes 📉 | Linear models assume predictable patterns
Non-Stationarity | Data patterns keep changing | Climate change & weather | Linear models assume stable trends
Non-Linearity | Relationships are not straight-line | Traffic jams 🚗 | Linear models assume smooth relationships
NONLINEAR MODELS IN TIME SERIES
ANALYSIS
• Linear models are good for simple patterns, but fail when data is chaotic,
non-stationary, or nonlinear. That’s where nonlinear models come in!
• There are three main nonlinear approaches to handling complex time series
data:
1️⃣Neural Networks (RNNs, LSTMs) – Learn patterns from past data.
2️⃣Chaos Theory (Lyapunov Exponent, Attractors) – Helps understand
unpredictable systems.
3️⃣Fractals & Self-Similarity – Finds repeating patterns at different scales.
• Let’s explain each in simple words with real-life examples.
NEURAL NETWORKS (RNN, LSTMS) –
LEARNING FROM PAST DATA
• 📌 Simple Explanation:
• A neural network is like a smart brain that learns from past data and predicts the
future.
• Traditional models (like ARIMA) only look at a few past points, but neural
networks can learn complex relationships over long periods.
NEURAL NETWORKS (RNN, LSTMS) –
LEARNING FROM PAST DATA
• 🟢 Real-Life Example: Voice Assistants (Alexa, Siri)
• When you say, "Hey Siri, play my favorite song," it remembers your past song
choices to predict what you like.
• It doesn’t just look at your last song; it learns long-term patterns using models
like LSTMs.
• 📌 Why Neural Networks Work?
• RNN (Recurrent Neural Networks): Good for short-term memory but struggles
with long sequences.
• LSTMs (Long Short-Term Memory): Better for remembering long-term trends,
perfect for weather forecasting, stock prices, and speech recognition.
CHAOS THEORY IN TIME SERIES –
UNDERSTANDING UNPREDICTABILITY
• 📌 Simple Explanation:
• Some time series look random but actually follow hidden rules.
• Chaos Theory helps us find patterns in unpredictable data using two key ideas:
• 🔹 (a) Lyapunov Exponent – How fast things become unpredictable
• If a system has a high Lyapunov Exponent, small errors grow very fast, making
long-term prediction impossible.
CHAOS THEORY IN TIME SERIES –
UNDERSTANDING UNPREDICTABILITY
• 🟢 Real-Life Example: Weather Forecasting
• Why can we only predict the weather accurately for 7-10 days?
• Because weather is chaotic, and small changes (temperature, wind) grow exponentially over time.

• 📌 Why is this important?


• If we know the Lyapunov Exponent, we can estimate how far ahead predictions will be accurate.
• 🔹 (b) Attractors – Hidden patterns in chaos
• Even chaotic systems often move around specific regions called attractors.
• This helps us understand patterns in seemingly random data.

• 🟢 Real-Life Example: Heartbeats ❤️


• Your heartbeat isn’t perfectly regular, but it follows a pattern.
• Doctors use chaos theory to detect heart diseases by analyzing how heart rate changes over time.

• 📌 Why Chaos Theory Helps?


• It doesn’t try to predict exact values, but it identifies patterns of change.
FRACTALS & SELF-SIMILARITY – REPEATING
PATTERNS AT DIFFERENT SCALES
• 📌 Simple Explanation:
• A fractal is a pattern that repeats itself at different scales.
• Many time series data show self-similarity, meaning small parts look similar to
the whole.
FRACTALS & SELF-SIMILARITY – REPEATING
PATTERNS AT DIFFERENT SCALES
• 🟢 Real-Life Example: Stock Market Charts 📈
• If you zoom into a 1-hour stock price chart, it often looks similar to a 1-day or
1-week chart.
• The same trends appear at different time scales.
• This is why technical traders use fractals to predict price movements.
• 📌 Why Fractals Matter?
• Helps us understand long-term dependencies in data.
• Used in image compression, financial modeling, and even biology.
FINAL SUMMARY: WHEN TO USE
NONLINEAR MODELS?

Problem | What Happens? | Real-Life Example | Solution (Nonlinear Model)
Long-Term Memory | Simple models forget past patterns | Alexa/Siri voice commands 🎤 | RNN, LSTMs
Chaos & Unpredictability | Small changes cause big effects | Weather forecasting, heartbeats ❤️ | Chaos Theory (Lyapunov, Attractors)
Self-Similarity | Small patterns repeat in big patterns | Stock market trends 📈 | Fractals & Self-Similarity
UNDERSTANDING RNN AND LSTM WITH
REAL-LIFE EXAMPLES
• Time series data often requires models that can remember past information
to make accurate predictions. Traditional models like ARIMA can handle simple
patterns, but struggle when long-term memory is needed.
• 👉 Solution? Recurrent Neural Networks (RNNs) and Long Short-Term
Memory (LSTMs)!
WHAT IS AN RNN (RECURRENT NEURAL
NETWORK)?
• 📌 Simple Explanation:
• Imagine you’re reading a book 📖. Each new sentence makes sense only because
you remember the previous ones.
• Similarly, RNNs process data one step at a time, remembering past information to
improve predictions.
• Unlike regular neural networks, RNNs have loops to keep information from previous
steps.
• 🔵 Real-Life Example: Predicting the Next Word in a Sentence
• If I say, "The cat sat on the ...", you can easily predict "mat" because you remember
the earlier words.
• RNNs work the same way! They use past words to predict the next one.
WHAT IS AN RNN (RECURRENT NEURAL
NETWORK)?
• 🔹 How RNN Works Step by Step?
• 1️⃣Takes input data (current step) ➝ Example: A word in a sentence.
2️⃣Processes it with memory (previous steps) ➝ Remembers past words.
3️⃣Outputs prediction ➝ Suggests the next word.
4️⃣Loops back to step 1 with updated memory.
• 🟢 Where is RNN Used?
• Speech Recognition (Google Assistant, Alexa)
• Language Translation 🌎 (Google Translate)
• Stock Price Prediction 📈 (Learning from past trends)
PROBLEM WITH RNN: SHORT-TERM
MEMORY LOSS 🧠
• 📌 RNNs are great but forget long-term dependencies!
• If a book is 500 pages long, you might forget what happened on page 5 while
reading page 400.
• Similarly, RNNs struggle when long-term memory is needed.
• This is called the vanishing gradient problem – old information fades away as
the network processes more data.
• 👉 Solution? LSTMs! 🚀
WHAT IS LSTM (LONG SHORT-TERM
MEMORY)?
• 📌 Simple Explanation:
• LSTMs are a smarter version of RNNs.
• They use special memory cells to remember important information and
forget unimportant details.
• This helps them handle long-term dependencies better.
• 🔵 Real-Life Example: Watching a TV Show 🎬
• If you’re watching a long series like Game of Thrones, you might forget small
details from Episode 1, but remember key plot twists that affect the ending.
• LSTMs do the same! They store important events and forget less important
ones.
HOW LSTM WORKS?

• LSTMs have three main gates:

Gate | What it does? | Real-Life Example (Watching a TV Series 📺)
Forget Gate | Removes unnecessary info | Forget minor character names
Input Gate ✍️ | Adds important new info | Remember main plot twists
Output Gate 🚪 | Decides what to use for prediction | Predict next season's events
• 🟢 Where is LSTM Used?
• Chatbots 🤖 (AI remembering long conversations)
• Weather Forecasting (Long-term pattern analysis)
• Stock Market Prediction 📈 (Keeping track of past trends)
RNN VS. LSTM: KEY DIFFERENCES

Feature | RNN | LSTM
Memory | Short-term | Long-term
Handles long sequences? | ❌ No | ✅ Yes
Solves vanishing gradient? | ❌ No | ✅ Yes
Example | Next word prediction | Long conversation
FINAL THOUGHTS: WHICH ONE TO USE?

• ✅ For simple sequences? Use RNN. (e.g., predicting next word in a sentence)
✅ For long-term memory? Use LSTM. (e.g., chatbot remembering long
conversations)
• 👉 Want to go even further? 🚀 Next step is Transformers (like GPT-4),
which outperform LSTMs for many tasks!
RNN and LSTM - Jupyter Notebook
WHAT DOES THIS CODE DO?

• Generates synthetic time series data (sine wave).


• Creates sequences of data for training the models.
• Trains an RNN and an LSTM model to predict the next value in the
sequence.
• Compares their predictions visually to see the difference in performance.
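The linked notebook is not reproduced here, but a minimal Keras sketch along the lines described above might look like this (TensorFlow is assumed to be installed; the window size, layer widths, and epoch count are arbitrary choices):

```python
import numpy as np
import tensorflow as tf

# 1) Synthetic time series: a sine wave
t = np.arange(0, 100, 0.1)
series = np.sin(t)

# 2) Sliding windows: use the last 20 values to predict the next one
window = 20
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]  # shape (samples, timesteps, features)

def build(cell):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window, 1)),
        cell(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# 3) Train an RNN and an LSTM on the same data
rnn = build(tf.keras.layers.SimpleRNN)
lstm = build(tf.keras.layers.LSTM)
rnn.fit(X, y, epochs=5, verbose=0)
lstm.fit(X, y, epochs=5, verbose=0)

# 4) Compare prediction error of the two models
print("RNN  MSE:", rnn.evaluate(X, y, verbose=0))
print("LSTM MSE:", lstm.evaluate(X, y, verbose=0))
```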
KEY OBSERVATIONS

• ✅ RNN works well for short-term memory but loses accuracy over longer
time steps.
✅ LSTM performs better for long-term dependencies due to its memory
cell mechanism.
CHAOS THEORY IN TIME SERIES
ANALYSIS
• Chaos theory helps us understand complex, unpredictable systems that
seem random but actually follow underlying rules. It is used to analyze
systems where small changes in the initial conditions can lead to huge
differences in outcomes—a concept popularly known as the Butterfly
Effect.
• For example:
🔹 A tiny change in weather conditions today can cause a storm next week.
🔹 A small difference in stock market data can lead to huge price fluctuations
over time.
KEY CONCEPTS IN CHAOS THEORY
• Lyapunov Exponent (Predictability of a System)
• The Lyapunov Exponent measures how quickly two nearly identical states in a system diverge over time.
• 📌 Interpretation:
• Positive Lyapunov Exponent (> 0): The system is chaotic—small differences in initial conditions grow exponentially.
Example: Weather changes, Stock market
• Zero Lyapunov Exponent (≈ 0): The system is on the edge of chaos—patterns exist but can break down.
• Negative Lyapunov Exponent (< 0): The system is stable—small changes fade away, and the system remains
predictable. Example: Simple pendulum without friction
• 📌 Real-life Example:
Consider two stock market investors starting with almost the same amount of money but making slightly
different investment choices. Over time, their portfolios may diverge drastically—one may become a
millionaire while the other goes bankrupt! This is chaos in action.
• Python Code to Calculate Lyapunov Exponent - Jupyter Notebook
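The notebook itself is not included here; a common textbook-style sketch estimates the Lyapunov exponent of the logistic map from its derivative (a standalone illustration, not the notebook's code):

```python
import numpy as np

def lyapunov_logistic(r, n=10000, x0=0.4):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x).

    It is the long-run average of log|f'(x)| = log|r*(1 - 2x)| along the
    orbit; a positive value means nearby starting points diverge (chaos).
    """
    x = x0
    for _ in range(100):  # discard the transient so the orbit settles
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += np.log(abs(r * (1 - 2 * x)))
    return total / n

for r in (2.5, 3.2, 4.0):
    lam = lyapunov_logistic(r)
    print(f"r = {r}: Lyapunov exponent ≈ {lam:.3f} "
          f"({'chaotic' if lam > 0 else 'stable'})")
```

For r = 4 the estimate comes out near ln 2 ≈ 0.693 (chaotic), while for r = 2.5 it is negative (the map settles to a fixed point).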
ATTRACTORS (WHERE THE SYSTEM
MOVES TOWARDS)
• An attractor is a state where a chaotic system eventually settles down or revisits frequently. There are three types of attractors:

Type | Description | Example
Fixed-point attractor | The system settles at a stable value. | A pendulum at rest.
Limit cycle attractor | The system moves in a repeating cycle. | A heartbeat or seasonal weather changes.
Strange attractor | The system never repeats exactly but stays within a range. | Weather, stock market, turbulence in airflows.

• 📌 Real-life Example: Strange Attractor (Lorenz System)
🔹 The weather system does not repeat but follows a specific pattern: it never settles into a stable cycle but remains within certain limits.
🔹 The stock market fluctuates without following an exact pattern but still stays within an upper and lower bound.

Python Code to Visualize the Lorenz Attractor - Jupyter Notebook
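Again the linked notebook is not reproduced; a standard sketch integrates the Lorenz equations with the classic parameters (σ = 10, ρ = 28, β = 8/3) and plots the butterfly-shaped trajectory:

```python
import numpy as np
import matplotlib.pyplot as plt

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

# Simple Euler integration (coarse, but enough to show the attractor)
dt, steps = 0.01, 10000
trajectory = np.empty((steps, 3))
trajectory[0] = (1.0, 1.0, 1.0)
for i in range(1, steps):
    trajectory[i] = trajectory[i - 1] + dt * lorenz(trajectory[i - 1])

ax = plt.figure().add_subplot(projection="3d")
ax.plot(*trajectory.T, lw=0.5)
ax.set_title("Lorenz strange attractor")
plt.show()
```

The orbit never repeats, yet it stays confined to the same butterfly-shaped region: exactly the "strange attractor" behavior described above.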
SUMMARY

• ✔ Lyapunov Exponent tells us whether a system is chaotic.


✔ Attractors describe where the system eventually moves.
✔ Weather, stock markets, and even human heartbeats show chaotic
behavior.
✔ Python can help analyze and visualize chaos!
FRACTALS & SELF-SIMILARITY IN TIME
SERIES ANALYSIS
• Fractals and self-similarity describe patterns that repeat at different scales,
meaning small parts of a system look similar to the whole system. These
patterns are seen in nature, financial markets, weather systems, and
even human biology.
WHAT ARE FRACTALS?

• A fractal is a pattern that repeats infinitely at different levels of magnification. This means
that if you zoom in or zoom out, the pattern still looks similar.
• 📌 Real-life Examples of Fractals:
• Coastlines → No matter how much you zoom in on a coastline, the curves and edges look similar at
different levels.
• Snowflakes ❄️→ Each branch of a snowflake looks like the whole snowflake.
• Clouds & Mountains → No matter how closely you look, the shape always appears rough and irregular.
• Stock Market Trends 📉 → If you observe stock prices on different time scales (seconds, minutes, hours,
or days), they often show similar ups and downs.
• 📌 Mathematical Example:
One famous fractal is the Mandelbrot Set, where complex equations generate infinite
repeating patterns.
WHAT IS SELF-SIMILARITY?

• Self-similarity means that small parts of a system resemble the entire system.
• 📌 Real-life Examples of Self-Similarity:
• Branches of a Tree 🌳 → A small branch looks like the entire tree.
• Blood Vessels in the Human Body 🩸 → The branching pattern of veins follows the same
structure at different levels.
• River Networks 🌊 → A small tributary looks like a miniature version of the whole river
system.
• 👉 In Time Series Analysis, self-similarity means that patterns in the data repeat
at different time scales.
• Example: Stock market data → A small 1-hour fluctuation in stock prices may look similar to a 1-month trend.

Generating a Simple Fractal (Sierpiński Triangle) - Jupyter Notebook
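The linked notebook is not included; one well-known way to generate the Sierpiński triangle is the "chaos game", sketched below (the point count and starting point are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

# Chaos game: repeatedly jump halfway toward a randomly chosen corner.
# The visited points trace out the Sierpinski triangle.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3) / 2]])
rng = np.random.default_rng(0)

point = np.array([0.3, 0.3])
points = np.empty((50000, 2))
for i in range(len(points)):
    point = (point + corners[rng.integers(3)]) / 2
    points[i] = point

plt.scatter(points[:, 0], points[:, 1], s=0.1)
plt.axis("equal")
plt.title("Sierpinski triangle via the chaos game")
plt.show()
```

Zooming into any corner of the plot shows a smaller copy of the whole triangle: the self-similarity discussed above.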
HOW ARE FRACTALS USED IN TIME
SERIES ANALYSIS?
• Fractals help us identify hidden patterns in chaotic time series data by
detecting self-similarity.
• 📌 Applications of Fractals in Time Series Analysis:
✔ Stock Market Prediction → Stock prices exhibit self-similarity, helping
traders analyze trends.
✔ Weather Forecasting → Cloud formations and temperature changes follow
fractal patterns.
✔ Biological Data Analysis → Heartbeats and brain waves show fractal
behavior.
SUMMARY

• ✔ Fractals → Patterns that repeat at different scales (e.g., trees, stock prices,
coastlines).
✔ Self-Similarity → Small parts of a system resemble the whole (e.g., river
networks, blood vessels).
✔ Used in Time Series Analysis → Helps in detecting patterns in finance,
weather, and biological systems.
✔ Python can generate fractals → Example: Sierpiński Triangle.
