CENG3300 Lecture 2-1

This document provides an overview of key concepts in linear algebra and calculus that are important for data science and molecular engineering. It introduces scalars, vectors, matrices, and tensors. It covers vector and matrix operations like transposition, inner and outer products, and norms. It also discusses matrix types, determinants, inverses, and using matrices to represent linear systems of equations. The document then explains concepts in calculus like derivatives, partial derivatives, and optimization of smooth functions using gradient descent. It notes the use of Jupyter notebooks for Python programming.


Data Science for Molecular Engineering
Lecture 2

Intended Learning Outcomes (ILO)
• To be familiar with representations and math operations in linear algebra
• To know the concept of the derivative, the derivatives of common functions, and the chain rule for differentiation
• To understand the concept of gradient descent, its applications, and its limitations

Survey 1 – go through the following problems
1.

2.

3.

4.

Scalar, vector, matrix and tensor
Scalar

Vector

Matrix

Tensor
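These four objects map naturally onto NumPy arrays of increasing dimension. A minimal sketch (using NumPy is my assumption; the course introduces Python programming below):

```python
import numpy as np

s = np.array(3.14)             # scalar: a 0-dimensional array
v = np.array([1.0, 2.0, 3.0])  # vector: a 1-dimensional array
M = np.array([[1.0, 2.0],
              [3.0, 4.0]])     # matrix: a 2-dimensional array
T = np.zeros((2, 3, 4))        # tensor: 3 or more dimensions

print(s.ndim, v.ndim, M.ndim, T.ndim)  # 0 1 2 3
```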

Vector
• Column and row vectors
• Examples

Vector
• Transpose

• Inner (and outer) product of two vectors

• Cosine similarity
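A minimal NumPy sketch of these vector operations (the example vectors are my own):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# transpose: meaningful for 2-D arrays, turning a column into a row
col = a.reshape(-1, 1)  # 3x1 column vector
row = col.T             # 1x3 row vector

inner = np.dot(a, b)    # inner product: 1*4 + 2*5 + 3*6 = 32
outer = np.outer(a, b)  # outer product: 3x3 matrix with entries a_i * b_j

# cosine similarity: inner product normalized by the vector lengths
cos_sim = inner / (np.linalg.norm(a) * np.linalg.norm(b))
print(inner, cos_sim)   # 32.0, ~0.9746
```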
Vector
• Norm of a vector

• 1-Norm, 2-Norm, infinity-norm
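All three norms are available through numpy.linalg.norm; a quick sketch:

```python
import numpy as np

v = np.array([3.0, -4.0])

print(np.linalg.norm(v, 1))       # 1-norm: |3| + |-4| = 7
print(np.linalg.norm(v))          # 2-norm (the default): sqrt(9 + 16) = 5
print(np.linalg.norm(v, np.inf))  # infinity-norm: max(|3|, |-4|) = 4
```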

Matrix
• Special types of matrix
• Diagonal, upper-triangular, lower-triangular, identity, symmetric

• Transpose
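A short NumPy sketch of the special matrix types and the transpose (the sample matrix is my own):

```python
import numpy as np

A = np.arange(1.0, 10.0).reshape(3, 3)

D = np.diag(np.diag(A))   # diagonal matrix built from A's diagonal
U = np.triu(A)            # upper-triangular part of A
Lo = np.tril(A)           # lower-triangular part of A
I = np.eye(3)             # 3x3 identity matrix

S = (A + A.T) / 2         # a symmetric matrix constructed from A
print(np.allclose(S, S.T))  # True: a symmetric matrix equals its transpose
```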

Matrix
• Math manipulations
• Addition
• Subtraction
• Scalar multiplication
• Matrix multiplication
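A minimal sketch of the four operations in NumPy (the example matrices are my own); note that matrix multiplication (@) is not the element-wise product (*):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

print(A + B)    # addition, element-wise
print(A - B)    # subtraction, element-wise
print(2 * A)    # scalar multiplication
print(A @ B)    # matrix multiplication: [[19, 22], [43, 50]]
```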

Matrix
• Determinant

• Inverse
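Both are one-liners in NumPy; a sketch with a 2x2 example of my own:

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])

det = np.linalg.det(A)  # for a 2x2 matrix: ad - bc = 4*6 - 7*2 = 10
inv = np.linalg.inv(A)  # the inverse exists only when det != 0

print(det)
print(np.allclose(A @ inv, np.eye(2)))  # True: A times its inverse is I
```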

Matrices and vectors can be used to conveniently represent linear equation systems

Overall score = 20% test 1 + 20% test 2 + 20% midterm + 10% quizzes + 30% final

OS = ? (in vector form?)
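In vector form, the overall score is the inner product of a weight vector with a score vector; a sketch with hypothetical component scores:

```python
import numpy as np

# weights from the grading scheme: test 1, test 2, midterm, quizzes, final
w = np.array([0.20, 0.20, 0.20, 0.10, 0.30])

# hypothetical scores for one student, in the same order
s = np.array([85.0, 90.0, 78.0, 95.0, 88.0])

overall = w @ s  # OS = w^T s
print(overall)   # 86.5

# for a whole class, stack score rows into a matrix S (n_students x 5);
# then S @ w gives every student's overall score at once
```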

Useful references
• http://www.cs.cmu.edu/~zkolter/course/linalg/linalg_notes.pdf
• https://pages.cs.wisc.edu/~amos/412/lecture-notes/lecture14.pdf

(Differential) Calculus
• Survey 2 – complete the poll after you have gone through the following problems
Calculate the derivatives of the following functions:

y = x² + 1

y = 1/(1 + e⁻ˣ)

y = |x|

For f(x, y, z) = x·cos z + x²y³eᶻ, calculate the partial derivatives ∂f/∂x, ∂f/∂y, and ∂f/∂z.

Derivative
• Continuous, differentiable, smooth
• The derivative measures how the function value (output variable, y) changes with respect to an infinitesimal change in the argument (input variable, x).
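This definition can be made concrete with a central finite difference; a minimal sketch:

```python
def numerical_derivative(f, x, h=1e-6):
    """Approximate f'(x) by a central difference with a small step h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# spot-check against a known result: d/dx (x^2 + 1) = 2x, so f'(3) = 6
print(numerical_derivative(lambda x: x**2 + 1, 3.0))  # ~6.0
```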

Derivatives of common functions

https://www.mathsisfun.com/calculus/derivatives-rules.html
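The rules in that table can be spot-checked symbolically; a sketch using SymPy (my choice of tool, not required by the course):

```python
import sympy as sp

x, n = sp.symbols('x n')

# common functions and their derivatives with respect to x
for f in (x**n, sp.sin(x), sp.cos(x), sp.exp(x), sp.log(x)):
    print(f, '->', sp.diff(f, x))
# e.g. sin(x) -> cos(x), exp(x) -> exp(x), log(x) -> 1/x
```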
Partial derivatives
A partial derivative is the derivative with respect to one variable, while holding all other variables constant.
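For the survey function f(x, y, z) = x·cos z + x²y³eᶻ, the three partial derivatives can be checked with SymPy (again my choice of tool):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x * sp.cos(z) + x**2 * y**3 * sp.exp(z)

print(sp.diff(f, x))  # df/dx = cos(z) + 2*x*y**3*exp(z)    (y, z held constant)
print(sp.diff(f, y))  # df/dy = 3*x**2*y**2*exp(z)          (x, z held constant)
print(sp.diff(f, z))  # df/dz = -x*sin(z) + x**2*y**3*exp(z) (x, y held constant)
```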

Mathematical Optimization (of a smooth function)

Some basic concepts:
• Objective: the function to be minimized, min f(x)
• Decision variables: the quantities we are free to choose
• Constraints: conditions listed after "s.t." (subject to)

Example:
min y = x − 1
s.t. ……

For univariate unconstrained optimization, at a local minimum the derivative is zero: dy/dx = 0.
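For a smooth univariate objective, an off-the-shelf routine finds the point where dy/dx = 0; a sketch using SciPy on an illustrative function y = (x − 1)², which is my own choice rather than the slide's example:

```python
from scipy.optimize import minimize_scalar

# illustrative smooth objective with its minimum at x = 1
res = minimize_scalar(lambda x: (x - 1)**2)

print(res.x)    # ~1.0, the point where dy/dx = 0
print(res.fun)  # ~0.0, the objective value at the minimum
```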
Numerical optimization
• Gradient
• The direction in which the function value increases the fastest:
grad f(x, y) = ∇f(x, y) = (∂f/∂x, ∂f/∂y)

• Step size
• The distance to move along the negative gradient direction (for minimization)

Q1: How do we know we are close to a local minimum?
Q2: What if the step size we choose is too small? Or too large?
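A minimal gradient-descent sketch for a univariate function (the objective, step size, and stopping rule are illustrative assumptions); it also hints at Q1 (stop when the gradient is nearly zero) and Q2 (the step size trades off speed against overshooting):

```python
def gradient_descent(grad, x0, step_size=0.1, tol=1e-6, max_iter=1000):
    """Minimize by stepping repeatedly along the negative gradient."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:     # Q1: near a local minimum the gradient -> 0
            break
        x -= step_size * g   # Q2: too small -> slow; too large -> overshoot
    return x

# illustrative objective y = (x - 1)^2, whose gradient is 2(x - 1)
print(gradient_descent(lambda x: 2 * (x - 1), x0=5.0))  # ~1.0
```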

Python Programming
• Jupyter notebook

• Install Jupyter Notebook before Friday’s lecture

• https://www.dataquest.io/blog/jupyter-notebook-tutorial/
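If Jupyter Notebook is not installed yet, a common route is to run `pip install notebook` in a terminal and then launch it with `jupyter notebook`.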
