
Time Complexity Analysis

This document discusses numerical linear algebra and parallel computing. It introduces motivation for studying these topics, including network analysis of social and biological networks, linear programming, and machine learning applications involving feature selection and natural language processing. However, real-world data sets are often too large for standard techniques. Sparse matrices offer a solution since many of their entries are zero. The document then covers the history of sparse matrices, provides an overview of topics to be covered including sparse matrix representations and algorithms, and analyzes algorithmic complexity using Big-O notation and the Master Theorem.


Numerical Linear Algebra &

Parallel Computing

Mohammed VI Polytechnic University

Nouredine Ouhaddou Mohamed Jalal Maaouni Ibrahim El Mountasser


Introduction
 Motivation

1. Network Analysis:
   1. Social Networks
   2. Biological Networks
2. Linear Programming:
3. Machine Learning:
   1. Feature selection
   2. Natural Language Processing

Data too large!

But many entries are zeros.

4
Introduction
 History overview

 The term "sparse matrix" was first used in the 1950s.
 George Forsythe is a pioneer in the field, with his development of Gaussian elimination for sparse matrices in the 1970s.
 More advances have been made since, with sophisticated algorithms and techniques for working with sparse matrices, including iterative and direct methods.

George Forsythe
5
Introduction
 Course overview

 Preliminaries:
  Complexity
  Memory
  Dense Matrices
  BLAS / LAPACK
 Sparse matrices as a data structure:
  Representation
  Storage / format
 Working with sparse matrices:
  Basic operations
  Algorithms for sparse matrices
  Direct / iterative methods
 Eigenvalue problems
6
Complexity Analysis
Measuring Efficiency
 Problem:
Given an integer n, count the number of its divisors.

Solution 1: Solution 2:

8
Measuring Efficiency
 Solution 1:
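The code for this slide did not survive the export; a minimal Python sketch of the brute-force approach (the function name is our choice):

```python
def count_divisors_v1(n):
    """Count the divisors of n by trying every candidate 1..n: ~n operations."""
    count = 0
    for d in range(1, n + 1):
        if n % d == 0:
            count += 1
    return count

print(count_divisors_v1(12))  # → 6  (divisors: 1, 2, 3, 4, 6, 12)
```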

9
Measuring Efficiency
 Solution 2:
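The slide's code is likewise missing; a plausible faster version exploiting the fact that divisors come in pairs (d, n // d), so scanning up to √n suffices (names are our choice):

```python
def count_divisors_v2(n):
    """Count the divisors of n by scanning d = 1..floor(sqrt(n)): ~sqrt(n) operations."""
    count = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            # d and n // d are both divisors; the root of a perfect square counts once.
            count += 1 if d * d == n else 2
        d += 1
    return count

print(count_divisors_v2(12))  # → 6, the same answer with ~sqrt(n) work
```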

10
Measuring Efficiency
 We will run the two programs for different values of n and measure which algorithm is faster.

n           Solution 1     Solution 2

10          … seconds      … seconds
1000        … seconds      … seconds
10000000    … seconds      … seconds

11
Measuring Efficiency
 What is the problem with this approach?
 Machine dependence

n           Solution 1          Solution 2

10          10 operations       ≈3 operations
1000        1000 operations     ≈31 operations
10000000    10^7 operations     ≈3163 operations

12
Measuring Efficiency
 Let's generalize for any n

Solution 1                  Solution 2

T(n) = n operations         T(n) = √n operations

 What happens when n is very large?
 T(1000) = 1000
 T(1000) = √1000 ≈ 31
13
Big-O notation
 English definition:
  Big-O is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
 Formal definition:
  T(n) = O(f(n)) if there exist constants c > 0 and n0 ≥ 1 such that T(n) ≤ c·f(n) for all n ≥ n0; that is, for all sufficiently large n, T(n) is bounded above by a constant multiple of f(n).

16
Big-O notation, Example #1
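The example on this slide is not preserved; a typical one, worked directly from the formal definition (the specific function is our choice):

```latex
\textbf{Claim: } T(n) = 2n^2 + 5n \implies T(n) = O(n^2).

\textbf{Proof: } choose $c = 3$ and $n_0 = 5$. For all $n \ge 5$ we have
$5n \le n^2$, hence
\[
T(n) = 2n^2 + 5n \le 2n^2 + n^2 = 3n^2 = c \cdot n^2. \qquad \square
\]
```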

17
Big-O notation, Example #2

Claim: ∀k ≥ 1, n^k is not O(n^(k−1)).
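The proof is not preserved on the slide; a standard argument by contradiction (our reconstruction):

```latex
\textbf{Proof (by contradiction): } suppose $n^k \le c \cdot n^{k-1}$
for some constant $c > 0$ and all $n \ge n_0$. Dividing both sides by
$n^{k-1}$ gives $n \le c$ for all $n \ge n_0$, which is false for any
fixed constant $c$. \qquad \square
```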
18
Case study: merge sort algorithm
 Why study merge sort?
 Good introduction to recursion, divide and conquer, and the master theorem.

 The sorting problem:


 input: array of n numbers

 output: the same array sorted in increasing order.

19
Case study: merge sort algorithm
 Example: [5, 3, 2, 7, 1, 6, 4, 8]

20
Case study: merge sort algorithm
 Pseudocode for the merge step:

C = the output array, length = n
A = 1st sorted array, length = n/2
B = 2nd sorted array, length = n/2

i = 1, j = 1
for k = 1 to n
    if j > n/2 or (i <= n/2 and A[i] <= B[j]):
        C[k] = A[i]
        i += 1
    else:
        C[k] = B[j]
        j += 1

(The bounds checks on i and j cover the steps where one half is already exhausted.)
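The merge step above, completed with the divide phase, can be sketched as a runnable Python illustration (not the course's own code):

```python
def merge_sort(arr):
    # Base case: arrays of length 0 or 1 are already sorted.
    if len(arr) <= 1:
        return arr
    # Divide: split in half and sort each half recursively.
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Conquer: merge the two sorted halves, as in the pseudocode above.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these two is non-empty
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 3, 2, 7, 1, 6, 4, 8]))  # → [1, 2, 3, 4, 5, 6, 7, 8]
```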

21
Case study: merge sort algorithm
 the divide phase:

22
The Master Method
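The theorem statement on this slide did not survive the export; the standard form, consistent with the examples and the three-case proofs that follow:

```latex
\textbf{Master Method.} Suppose $T(n) \le a\,T(n/b) + O(n^d)$
with constants $a \ge 1$, $b > 1$, $d \ge 0$. Then
\[
T(n) =
\begin{cases}
O(n^d)          & \text{if } a < b^d \quad \text{(case 1)}\\
O(n^d \log n)   & \text{if } a = b^d \quad \text{(case 2)}\\
O(n^{\log_b a}) & \text{if } a > b^d \quad \text{(case 3)}
\end{cases}
\]
```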

23
The Master Method
 Example 1: Merge sort
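The worked parameters are missing from the export; assuming the standard merge sort recurrence:

```latex
T(n) \le 2\,T(n/2) + O(n) \;\Rightarrow\; a = 2,\; b = 2,\; d = 1.
\text{Since } a = b^d \;(2 = 2^1), \text{ case 2 applies: } T(n) = O(n \log n).
```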

24
The Master Method
 Example 2: Binary search
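The worked parameters for binary search are also missing; with the standard recurrence:

```latex
T(n) \le T(n/2) + O(1) \;\Rightarrow\; a = 1,\; b = 2,\; d = 0.
\text{Since } a = b^d \;(1 = 2^0), \text{ case 2 applies: } T(n) = O(n^0 \log n) = O(\log n).
```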

25
The Master Method
 Intuition for the 3 cases:
 At level j of the recursion tree there are a^j subproblems, each of size n/b^j, so the amount of work at level j is:
     a^j · c · (n/b^j)^d = c · n^d · (a/b^d)^j
 The ratio a/b^d decides whether the work per level shrinks (case 1), stays constant (case 2), or grows (case 3) with j.
26
The Master Method: Proof case 1
The work at level j: c · n^d · (a/b^d)^j, where a/b^d < 1.
The total work: Σ_{j=0}^{log_b n} c · n^d · (a/b^d)^j ≤ c · n^d · 1/(1 − a/b^d) = O(n^d), since the geometric sum is bounded by a constant.

27
Recap: basic sums fact

 For r ≠ 1: Σ_{j=0}^{k} r^j = (r^{k+1} − 1)/(r − 1).
 If r < 1, the sum is at most 1/(1 − r): a CONSTANT, independent of k.
 If r > 1, the sum is at most r^k · r/(r − 1): a constant times the largest term.

28
The Master Method: Proof case 2
The work at level j: c · n^d · (a/b^d)^j = c · n^d, since a = b^d.
The total work: c · n^d on each of the log_b n + 1 levels, so T(n) = O(n^d log n).

29
The Master Method: Proof case 3
The work at level j: c · n^d · (a/b^d)^j, where a/b^d > 1, so the deepest level dominates.
The total work: O(n^d · (a/b^d)^{log_b n}) = O(a^{log_b n}) = O(n^{log_b a}), a constant times the work at the last level.

30
