INTRODUCTION TO TIME SERIES ANALYSIS: LINEAR SYSTEMS & NONLINEAR DYNAMICS, RULE INDUCTION
Raajeev H Dave
SESSION
OVERVIEW
• Part 1: Introduction & Basics
• Part 2: Linear Systems
Analysis
• Part 3: Nonlinear Dynamics in
Time Series
• Part 4: Rule Induction in Time
Series
• Part 5: Q&A & Discussion
WHAT IS A TIME SERIES?
(IN SIMPLE WORDS)
• Imagine you are keeping track of the
outside temperature every hour. You
write down the temperature at 8
AM, 9 AM, 10 AM, and so on. If
you look at this list later, you will see
how the temperature is changing
over time.
• A Time Series is just a collection of
numbers recorded at different
times in a specific order.
• In simple terms, time series =
data collected over time.
REAL-LIFE EXAMPLES OF TIME SERIES
DATA
• We see time series data everywhere in real life. Here are some simple
examples:
• 1️⃣Stock Prices 🏦
• The price of a company’s stock goes up and down every second.
• Investors analyze past stock prices to predict future prices.
• 2️⃣Weather Data 🌦
• The temperature, humidity, and rainfall are measured every day.
• Weather scientists use past data to forecast tomorrow’s weather.
REAL-LIFE EXAMPLES OF TIME SERIES
DATA
• 3️⃣Heart Rate Monitoring 💓
• Smartwatches record heartbeats every second.
• Doctors check this data to detect heart problems.
• 4️⃣Electricity Usage 💡
• Power companies measure electricity use every hour.
• They use this data to prevent power failures.
• 5️⃣Traffic Data 🚦
• The number of cars on the road is counted every minute.
• This helps in predicting traffic jams.
WHY IS TIME SERIES IMPORTANT IN
DATA SCIENCE & MACHINE LEARNING?
• Time series is important because many things change over time, and we
want to understand the past and predict the future.
• 1️⃣Predicting the Future 🔮
• If we analyze past sales, we can predict next month's sales.
• If we study past weather, we can forecast next week's temperature.
• 2️⃣Finding Patterns 📊
• Time series helps in detecting seasonal trends (like more ice cream sales in
summer).
• Businesses use this to plan ahead.
WHY IS TIME SERIES IMPORTANT IN
DATA SCIENCE & MACHINE LEARNING?
• 3️⃣Anomaly Detection 🚨
• Banks check time series of transactions to catch fraud.
• Factories use time series to detect machine failures before they happen.
• 4️⃣Optimizing Resources 💰
• Electricity companies use time series to reduce power wastage.
• Hospitals use patient data to allocate doctors and beds efficiently.
FINAL SUMMARY
• Time series is all about studying how things change over time, so we can
understand patterns, detect problems, and make better predictions for
the future.
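As a quick illustration, here is a minimal Python sketch of a time series using pandas (the hourly temperature values are made up for illustration):

```python
import pandas as pd

# Hypothetical hourly temperature readings, starting at 8 AM
times = pd.date_range("2024-01-01 08:00", periods=5, freq="h")
temps = pd.Series([12.0, 13.5, 15.0, 16.2, 17.0], index=times)

print(temps)
print("Average temperature:", temps.mean())
```

The index carries the time order, which is exactly what makes this a time series rather than just a list of numbers.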
TYPES OF TIME SERIES DATA (IN SIMPLE
WORDS)
• Time series data can be divided into two major types based on:
1️⃣How the data behaves over time (Stationary vs. Non-Stationary)
2️⃣How many things we are measuring (Univariate vs. Multivariate)
• Let’s break it down in a simple way with real-life examples.
STATIONARY VS. NON-STATIONARY TIME
SERIES
• A time series can either stay stable over time or keep changing.
• ✅ Stationary Time Series (Stable & Predictable)
• A stationary time series means that its pattern does not change over time.
• The average (mean) and variation (spread of data) remain constant.
• 🔹 Example:
📌 Heart Rate at Rest
• If you check your heart rate while sitting still, it stays around 70 beats per minute.
• It might slightly go up or down but generally stays within a fixed range.
• This is a stationary time series because it has a stable pattern.
• Why is it useful?
• Stationary data is easier to analyze and make predictions.
NON-STATIONARY TIME SERIES
(CHANGES OVER TIME)
• A non-stationary time series keeps changing its pattern over time.
• The average or variation is not constant.
• 🔹 Example:
📌 Stock Prices
• A company’s stock price keeps increasing or decreasing over months.
• There is no fixed average price—it keeps changing over time.
• This is a non-stationary time series because it has a changing trend.
• Why is it challenging?
• Harder to predict because the pattern keeps changing.
• We often convert non-stationary data into stationary (by removing trends).
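One common way to remove a trend is differencing: replacing each value with the change from the previous value. A minimal sketch with synthetic data (the trend and noise are made up for illustration):

```python
import numpy as np

# A non-stationary series: a rising trend plus random noise
rng = np.random.default_rng(0)
trend = np.arange(100, dtype=float)          # the mean keeps rising
series = trend + rng.normal(0, 1, size=100)

# First difference: y[t] - y[t-1] removes the linear trend
diffed = np.diff(series)

# The original series has a drifting mean; the differenced one does not
print("Original halves:", series[:50].mean().round(2), series[50:].mean().round(2))
print("Differenced halves:", diffed[:50].mean().round(2), diffed[49:].mean().round(2))
```

After differencing, both halves of the series hover around the same mean, which is the stationarity property linear models rely on.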
UNIVARIATE VS. MULTIVARIATE TIME
SERIES
• This depends on how many things we are measuring over time.
• ✅ Univariate Time Series (One Variable Over Time)
• A single type of data recorded over time.
• 🔹 Example:
📌 Daily Temperature
• If we measure only the temperature every day, this is univariate.
• We are tracking just one thing (temperature) over time.
• Why is it useful?
• Simple to analyze and forecast.
MULTIVARIATE TIME SERIES (MULTIPLE
VARIABLES OVER TIME)
• More than one type of data recorded at the same time.
• 🔹 Example:
📌 Weather Report (Temperature + Humidity + Wind Speed)
• Every day, we measure:
✅ Temperature (e.g., 30°C)
✅ Humidity (e.g., 60%)
✅ Wind Speed (e.g., 10 km/h)
• This is multivariate because we are tracking multiple things at once.
• Why is it useful?
• It gives a fuller picture (like how humidity + wind speed affect temperature).
• Helps in better predictions.
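A multivariate series is naturally a table with one column per variable. A minimal sketch as a pandas DataFrame (the weather values are illustrative):

```python
import pandas as pd

# Daily weather readings: three variables measured at the same times
weather = pd.DataFrame(
    {"temp_c": [30, 31, 29], "humidity_pct": [60, 65, 70], "wind_kmh": [10, 12, 8]},
    index=pd.date_range("2024-06-01", periods=3, freq="D"),
)
print(weather)

# Univariate = one column; multivariate = all columns together
print(weather["temp_c"])
```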
FINAL SUMMARY (IN ONE LINE)
• ✔ Stationary Time Series = Stable & predictable (like resting heart rate).
❌ Non-Stationary Time Series = Always changing (like stock prices).
✔ Univariate Time Series = One thing measured over time (like daily
temperature).
✔ Multivariate Time Series = Multiple things measured together (like
weather conditions).
LINEAR SYSTEMS ANALYSIS IN TIME
SERIES (IN SIMPLE WORDS)
• When analyzing time series, we need methods to understand trends and
make predictions. Some of the most popular methods are based on linear
systems (meaning they assume data follows a straight-line relationship).
CLASSICAL LINEAR METHODS FOR TIME
SERIES ANALYSIS
• These methods assume that past values can help predict future values
using simple mathematical rules.
MOVING AVERAGES (SMA & EMA) –
SMOOTHING THE DATA
• Moving averages help us remove noise (random ups and downs) from data and find the main trend.
• ✅ Simple Moving Average (SMA) – The Basic Method
• What it does:
• Takes the average of the last few values to smooth the data.
• Helps to see the trend clearly.
• 🔹 Example:
Imagine tracking your weekly sales:
• Week 1: 50 sales
• Week 2: 60 sales
• Week 3: 70 sales
• Week 4: 80 sales
• If we take a 3-week moving average, we calculate:
• (50+60+70)/3=60
• (60+70+80)/3=70
• This removes sudden jumps and shows a smooth trend.
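The same 3-week moving average can be computed with pandas, using the sales numbers above:

```python
import pandas as pd

sales = pd.Series([50, 60, 70, 80])          # weekly sales from the example
sma3 = sales.rolling(window=3).mean()        # 3-week simple moving average

print(sma3.tolist())  # first two entries are NaN (not enough history yet)
```

The first two positions are NaN because a 3-week average needs three weeks of history; the remaining values are 60.0 and 70.0, matching the hand calculation.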
EXPONENTIAL MOVING AVERAGE (EMA) –
GIVING MORE IMPORTANCE TO RECENT DATA
• Unlike SMA, which treats all data points equally, EMA gives more weight to
recent values.
• This makes it more responsive to changes.
• 🔹 Example:
If a company’s stock suddenly starts increasing, EMA will catch it faster
than SMA.
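The difference in responsiveness shows up in a small sketch (the prices are made up; pandas' `ewm` computes the EMA):

```python
import pandas as pd

# A flat price series with a sudden jump at the end
prices = pd.Series([100, 100, 100, 100, 120])

sma3 = prices.rolling(window=3).mean()
ema3 = prices.ewm(span=3, adjust=False).mean()

# After the jump, EMA has moved further toward 120 than SMA
print("SMA last:", round(sma3.iloc[-1], 2))  # (100+100+120)/3 = 106.67
print("EMA last:", round(ema3.iloc[-1], 2))  # 0.5*120 + 0.5*100 = 110.0
```

With `span=3`, the EMA weight on the newest value is 2/(3+1) = 0.5, so it reacts to the jump immediately, while the SMA still averages in two old values.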
AUTOREGRESSIVE (AR) AND MOVING
AVERAGE (MA) MODELS
• ✅ Autoregressive Model (AR) – Using Past Values to Predict Future
• The AR model predicts the next value based on previous values.
• It assumes that today’s value depends on yesterday’s values.
• 🔹 Example:
Imagine predicting today’s temperature based on the last 3 days:
• Today's Temperature = 0.6 × (Yesterday's) + 0.3 × (Day before Yesterday) + 0.1 × (Three Days Ago)
• If past temperatures were 28°C, 30°C, and 32°C (yesterday's being 30°C), then:
• Predicted Temp = (0.6 × 30) + (0.3 × 28) + (0.1 × 32) = 29.6°C
• ✅ Adding Past Errors (the MA Part) – An ARMA-Style Sales Example
• Suppose the model is: Yt = (0.7 × Y_{t-1}) + (0.3 × e_{t-1}) + e_t
• Let's assume:
• Yesterday's Sales (Y_{t-1}) = 100 units
• Yesterday's Error (e_{t-1}) = 5 units
• Today's Random Error (e_t) = 2 units
• Now, using the formula:
• Yt = (0.7 × 100) + (0.3 × 5) + 2 = 70 + 1.5 + 2 = 73.5 units
• Predicted Sales for Today = 73.5 units
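Both worked examples can be checked in a few lines of Python (the coefficients are the illustrative ones from these slides, not values fitted to real data):

```python
# AR(3) temperature prediction: weighted sum of the last three days
ar_pred = 0.6 * 30 + 0.3 * 28 + 0.1 * 32
print(ar_pred)  # 29.6  (18 + 8.4 + 3.2)

# ARMA-style sales prediction: past value, past error, and today's error
# Y_t = 0.7 * Y_{t-1} + 0.3 * e_{t-1} + e_t
ma_pred = 0.7 * 100 + 0.3 * 5 + 2
print(ma_pred)  # 73.5  (70 + 1.5 + 2)
```

In practice the coefficients are estimated from historical data (e.g., with `statsmodels`), not chosen by hand.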
BREAKDOWN OF THE FORMULA:
• Imagine you are tracking the stock price of a company. Every day, the price
goes up and down, but you want to see the overall trend.
• 📌 SMA (Simple Moving Average)
• Takes the average of the last N days (e.g., the last 3 days).
• If the stock prices for 5 days are 100, 102, 98, 101, and 105, then the 3-day SMA
is: (100+102+98)/3=100
• The next day, it shifts forward and calculates a new average.
EXAMPLE 2: WEATHER DATA
• The temperature of a city fluctuates daily, but SMA and EMA can help
predict trends (e.g., seasonal changes).
• SMA: A 7-day SMA smooths daily fluctuations and shows the weekly trend.
• EMA: If today’s temperature suddenly rises due to a heatwave, EMA reacts
faster than SMA.
AR MODEL – USING PAST VALUES TO
PREDICT FUTURE VALUES
• What Does It Do?
• Uses past data points to predict the next value.
• Assumes that what happened in the past influences the future.
FOURIER TRANSFORM – WHAT DOES IT DO?
• Simple Explanation:
• It converts a time-based signal (something that changes over time) into a
frequency-based signal (how often something repeats).
• It helps us identify hidden cycles in data.
• Think of it like turning a song into individual musical notes. Instead of
hearing a song as a mix of sounds, you break it down into different beats and
tones.
REAL-LIFE EXAMPLE 1: WEATHER
PATTERNS
• Imagine you are tracking the temperature of a city over a year.
• Looking at daily temperature data, you see ups and downs, but it’s hard to
see any clear pattern.
• The Fourier Transform helps reveal seasonal cycles:
• A 1-year cycle (summer, winter, etc.).
• A weekly cycle (weekend temperature shifts).
• A daily cycle (temperature is lower at night, higher in the day).
• 📌 How It Helps:
• We can predict temperature for future days or weeks based on these
patterns.
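This cycle-finding can be sketched with NumPy's FFT on synthetic daily temperatures (the data is simulated with a built-in yearly cycle, not real weather):

```python
import numpy as np

# Two years of synthetic daily temperatures: one cycle per 365 days
days = np.arange(730)
temps = 15 + 10 * np.sin(2 * np.pi * days / 365)

# FFT converts the time-domain signal into frequency components
spectrum = np.abs(np.fft.rfft(temps - temps.mean()))
freqs = np.fft.rfftfreq(len(days), d=1.0)   # in cycles per day

# The strongest frequency component reveals the hidden cycle
dominant = freqs[spectrum.argmax()]
print("Dominant period (days):", round(1 / dominant))  # 365
```

The mean is subtracted first so the zero-frequency (constant) component does not drown out the real cycle.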
REAL-LIFE EXAMPLE 2: STOCK MARKET
ANALYSIS
• Stock prices move up and down daily, making it hard to see trends.
• Fourier Transform can identify hidden cycles like:
• Weekly trends (Monday dips, Friday spikes).
• Quarterly earnings cycles.
• Long-term market trends.
• 📌 How It Helps:
• Traders use Fourier analysis to identify patterns and make better investment
decisions.
REAL-LIFE EXAMPLE 3: IOT SENSOR
DATA (SMART HOME DEVICES)
• Your smart home temperature sensor records data every minute.
• You want to understand how often the air conditioning turns on/off.
• Fourier Transform can find patterns in temperature changes and predict
when the AC should turn on.
• 📌 How It Helps:
• Helps optimize energy consumption by detecting unnecessary power usage.
FREQUENCY DOMAIN ANALYSIS: WHY
DO WE NEED IT?
• Simple Explanation:
• Normally, we analyze time series in the time domain (looking at values over time).
• Frequency domain analysis helps us see cycles and repeating patterns more
clearly.
• Think of a car engine:
• Instead of looking at the engine’s speed over time, we analyze its vibration
frequency to find if something is wrong.
REAL-LIFE EXAMPLE 4: MUSIC & SOUND
PROCESSING
• A song is a mixture of different frequencies (bass, vocals, instruments).
• Fourier Transform breaks down music into different sound frequencies.
• This is how equalizers adjust bass and treble in music apps.
• 📌 How It Helps:
• Improves sound quality and allows noise removal in audio recordings.
FINAL SUMMARY IN SIMPLE WORDS
• Frequency Domain Analysis → Helps identify hidden repeating cycles. Examples: machine vibration analysis, music equalization.
WHY DO LINEAR MODELS FAIL IN TIME
SERIES ANALYSIS?
• Linear models (like AR, MA, ARMA, ARIMA) are great for simple patterns, but
they often fail when time series data becomes too complex.
• There are three main reasons why linear models fail:
1️⃣Chaos (Unpredictable changes)
2️⃣Non-Stationarity (Changing patterns over time)
3️⃣Non-Linearity (Relationships that are not straight-line)
• Let’s explain these in simple words with real-life examples.
CHAOS – WHEN SMALL CHANGES LEAD
TO BIG EFFECTS
• 📌 Simple Explanation:
• Some systems are so sensitive that a tiny change today can cause a huge difference in the future.
• This is called the "Butterfly Effect" (a butterfly flapping its wings in one place could cause a storm
somewhere else).
• 🟢 Real-Life Example: Stock Market 📈
• Imagine you have a simple stock price prediction model based on past prices.
• But unexpected news (e.g., a new government policy, a big company's bankruptcy) can suddenly
change everything.
• A linear model assumes the past determines the future, but in chaotic systems like the stock
market, tiny events can cause big swings.
• 📌 Why Do Linear Models Fail?
• Linear models assume a predictable pattern, but chaotic systems don’t follow fixed rules.
NON-STATIONARITY – WHEN DATA
PATTERNS KEEP CHANGING
• 📌 Simple Explanation:
• If a time series has a changing trend, it is called non-stationary.
• Linear models assume data has a constant average and variance, but real-world data often
changes over time.
• 🟢 Real-Life Example: Weather Forecasting
• Imagine you are trying to predict rainfall based on past data.
• If you live in a place where climate change is affecting rainfall, past data won’t match future
trends.
• A linear model assumes the weather follows past trends, but in reality, climate patterns
shift over decades.
• 📌 Why Do Linear Models Fail?
• Linear models assume the data is stable over time, but in real life, patterns evolve.
NON-LINEARITY – WHEN RELATIONSHIPS
ARE NOT STRAIGHT-LINE
• 📌 Simple Explanation:
• Linear models assume that one thing affects another in a straight-line way.
• But in many cases, small changes in one factor can lead to unexpected, large effects.
• 🟢 Real-Life Example: Traffic Prediction 🚗
• Suppose you are predicting how long it takes to get home from work based on past traffic data.
• A linear model might assume:
• "More cars on the road → More traffic → Longer travel time."
• But in reality, travel time rises nonlinearly: once the road nears capacity, a few extra cars can suddenly turn free-flowing traffic into a jam.
• Problem → What Happens? → Solution (Nonlinear Model) → Real-Life Example
• Long-Term Memory → Simple models forget past patterns → RNNs, LSTMs → Alexa/Siri voice commands 🎤
• Chaos & Unpredictability → Small changes cause big effects → Chaos Theory (Lyapunov, Attractors) → Weather Forecasting, Heartbeats ❤️
• Self-Similarity → Small patterns repeat in big patterns → Fractals & Self-Similarity → Stock Market Trends 📈
UNDERSTANDING RNN AND LSTM WITH
REAL-LIFE EXAMPLES
• Time series data often requires models that can remember past information
to make accurate predictions. Traditional models like ARIMA can handle simple
patterns, but struggle when long-term memory is needed.
• 👉 Solution? Recurrent Neural Networks (RNNs) and Long Short-Term
Memory (LSTMs)!
WHAT IS AN RNN (RECURRENT NEURAL
NETWORK)?
• 📌 Simple Explanation:
• Imagine you’re reading a book 📖. Each new sentence makes sense only because
you remember the previous ones.
• Similarly, RNNs process data one step at a time, remembering past information to
improve predictions.
• Unlike regular neural networks, RNNs have loops to keep information from previous
steps.
• 🔵 Real-Life Example: Predicting the Next Word in a Sentence
• If I say, "The cat sat on the ...", you can easily predict "mat" because you remember
the earlier words.
• RNNs work the same way! They use past words to predict the next one.
WHAT IS AN RNN (RECURRENT NEURAL
NETWORK)?
• 🔹 How Does an RNN Work, Step by Step?
• 1️⃣Takes input data (current step) ➝ Example: A word in a sentence.
2️⃣Processes it with memory (previous steps) ➝ Remembers past words.
3️⃣Outputs prediction ➝ Suggests the next word.
4️⃣Loops back to step 1 with updated memory.
• 🟢 Where is RNN Used?
• Speech Recognition (Google Assistant, Alexa)
• Language Translation 🌎 (Google Translate)
• Stock Price Prediction 📈 (Learning from past trends)
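The loop described above can be sketched as a toy RNN cell in NumPy (the weights are random and the sizes arbitrary; a real RNN learns its weights from data):

```python
import numpy as np

# A minimal RNN cell: each step mixes the current input with the
# hidden state (memory) carried over from previous steps
rng = np.random.default_rng(0)
Wx = rng.normal(0, 0.1, (4, 3))   # input -> hidden weights
Wh = rng.normal(0, 0.1, (4, 4))   # hidden -> hidden weights (the "loop")
b = np.zeros(4)

def rnn_step(x, h):
    return np.tanh(Wx @ x + Wh @ h + b)

h = np.zeros(4)                    # memory starts empty
for x in rng.normal(size=(5, 3)):  # a sequence of 5 inputs
    h = rnn_step(x, h)             # memory is updated at every step

print("Final hidden state:", h.round(3))
```

The final hidden state summarizes the whole sequence, which is why the last output can depend on early inputs, at least until the vanishing gradient washes them out.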
PROBLEM WITH RNN: SHORT-TERM
MEMORY LOSS 🧠
• 📌 RNNs are great but forget long-term dependencies!
• If a book is 500 pages long, you might forget what happened on page 5 while
reading page 400.
• Similarly, RNNs struggle when long-term memory is needed.
• This is called the vanishing gradient problem – old information fades away as
the network processes more data.
• 👉 Solution? LSTMs! 🚀
WHAT IS LSTM (LONG SHORT-TERM
MEMORY)?
• 📌 Simple Explanation:
• LSTMs are a smarter version of RNNs.
• They use special memory cells to remember important information and
forget unimportant details.
• This helps them handle long-term dependencies better.
• 🔵 Real-Life Example: Watching a TV Show 🎬
• If you’re watching a long series like Game of Thrones, you might forget small
details from Episode 1, but remember key plot twists that affect the ending.
• LSTMs do the same! They store important events and forget less important
ones.
WHEN TO USE RNN VS. LSTM?
• ✅ For simple sequences? Use RNN. (e.g., predicting next word in a sentence)
✅ For long-term memory? Use LSTM. (e.g., chatbot remembering long
conversations)
• 👉 Want to go even further? 🚀 Next step is Transformers (like GPT-4),
which outperform LSTMs for many tasks!
RNN and LSTM - Jupyter Notebook
WHAT DOES THIS CODE DO?
• ✅ RNN works well for short-term memory but loses accuracy over longer
time steps.
✅ LSTM performs better for long-term dependencies due to its memory
cell mechanism.
CHAOS THEORY IN TIME SERIES
ANALYSIS
• Chaos theory helps us understand complex, unpredictable systems that
seem random but actually follow underlying rules. It is used to analyze
systems where small changes in the initial conditions can lead to huge
differences in outcomes—a concept popularly known as the Butterfly
Effect.
• For example:
🔹 A tiny change in weather conditions today can cause a storm next week.
🔹 A small difference in stock market data can lead to huge price fluctuations
over time.
KEY CONCEPTS IN CHAOS THEORY
• Lyapunov Exponent (Predictability of a System)
• The Lyapunov Exponent measures how quickly two nearly identical states in a system diverge over time.
• 📌 Interpretation:
• Positive Lyapunov Exponent (> 0): The system is chaotic—small differences in initial conditions grow exponentially.
Example: Weather changes, Stock market
• Zero Lyapunov Exponent (≈ 0): The system is on the edge of chaos—patterns exist but can break down.
• Negative Lyapunov Exponent (< 0): The system is stable—small changes fade away, and the system remains
predictable. Example: Simple pendulum without friction
• 📌 Real-life Example:
Consider two stock market investors starting with almost the same amount of money but making slightly
different investment choices. Over time, their portfolios may diverge drastically—one may become a
millionaire while the other goes bankrupt! This is chaos in action.
• Python Code to Calculate Lyapunov Exponent - Jupyter Notebook
ATTRACTORS (WHERE THE SYSTEM
MOVES TOWARDS)
• An attractor is a state where a chaotic system eventually settles down or
revisits frequently. There are three types of attractors:
• 📌 Real-life Example: Strange Attractor (Lorenz System) 🔹 The weather
system does not repeat but follows a specific pattern—it never settles into a
stable cycle but remains within certain limits.
🔹 The stock market fluctuates without following an exact pattern but still
stays within an upper and lower bound.
• Fixed-point attractor → The system settles at a stable value. Example: a pendulum at rest.
• Limit cycle attractor → The system moves in a repeating cycle. Example: a heartbeat or seasonal weather changes.
• Strange attractor → The system never repeats exactly but stays within a range. Example: weather, stock market, turbulence in airflows.
Python Code to Visualize the Lorenz Attractor -
Jupyter Notebook
WHAT IS A FRACTAL?
• A fractal is a pattern that repeats infinitely at different levels of magnification. This means
that if you zoom in or zoom out, the pattern still looks similar.
• 📌 Real-life Examples of Fractals:
• Coastlines → No matter how much you zoom in on a coastline, the curves and edges look similar at
different levels.
• Snowflakes ❄️→ Each branch of a snowflake looks like the whole snowflake.
• Clouds & Mountains → No matter how closely you look, the shape always appears rough and irregular.
• Stock Market Trends 📉 → If you observe stock prices on different time scales (seconds, minutes, hours,
or days), they often show similar ups and downs.
• 📌 Mathematical Example:
One famous fractal is the Mandelbrot Set, where complex equations generate infinite
repeating patterns.
WHAT IS SELF-SIMILARITY?
• Self-similarity means that small parts of a system resemble the entire system.
• 📌 Real-life Examples of Self-Similarity:
• Branches of a Tree 🌳 → A small branch looks like the entire tree.
• Blood Vessels in the Human Body 🩸 → The branching pattern of veins follows the same
structure at different levels.
• River Networks 🌊 → A small tributary looks like a miniature version of the whole river
system.
• 👉 In Time Series Analysis, self-similarity means that patterns in the data repeat
at different time scales.
• Example: Stock market data → A small 1-hour fluctuation in stock prices may look similar to a 1-month trend.
Generating a Simple Fractal (Sierpiński Triangle) - Jupyter Notebook
HOW ARE FRACTALS USED IN TIME
SERIES ANALYSIS?
• Fractals help us identify hidden patterns in chaotic time series data by
detecting self-similarity.
• 📌 Applications of Fractals in Time Series Analysis:
✔ Stock Market Prediction → Stock prices exhibit self-similarity, helping
traders analyze trends.
✔ Weather Forecasting → Cloud formations and temperature changes follow
fractal patterns.
✔ Biological Data Analysis → Heartbeats and brain waves show fractal
behavior.
SUMMARY
• ✔ Fractals → Patterns that repeat at different scales (e.g., trees, stock prices,
coastlines).
✔ Self-Similarity → Small parts of a system resemble the whole (e.g., river
networks, blood vessels).
✔ Used in Time Series Analysis → Helps in detecting patterns in finance,
weather, and biological systems.
✔ Python can generate fractals → Example: Sierpiński Triangle.