Introduction
Fall 2024
The University of Jordan
Dr. Tamam AlSarhan
Content
I. Introduction
II. Neural Nets as Universal Approximators
III. Training a Neural Network – Part 1 (The Problem of Learning)
IV. Training a Neural Network – Part 2 (Gradient Descent, Training the Network)
V. Training a Neural Network – Part 3 (Backpropagation, Calculus of Backpropagation)
VI. Training a Neural Network – Part 4 (Loss Functions, Regularizers, Dropout, …)
VII. Convolutional Neural Networks (CNNs)
VIII. Recurrent Neural Networks (RNNs)
IX. Transformers and Attention
X. Generative Adversarial Networks (GANs)
XI. Large Language Models (LLMs)
About this course
• Introduction to deep learning
• basics of ML assumed
• mostly high-school math
• Layers
• Input data and targets
• Loss function
• Optimizer
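The four ingredients above can be sketched end to end with a toy linear model in plain NumPy (illustrative code, not tied to any specific framework; all names here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Input data and targets: learn y = 2x + 1 from noisy samples
x = rng.normal(size=(100, 1))
y = 2.0 * x + 1.0 + 0.01 * rng.normal(size=(100, 1))

# Layer: a single dense layer whose state is a weight w and bias b
w = rng.normal(size=(1, 1))
b = np.zeros((1,))

def forward(x):
    return x @ w + b

# Loss function: mean squared error
def mse(pred, target):
    return np.mean((pred - target) ** 2)

# Optimizer: plain gradient descent on w and b
lr = 0.1
for step in range(200):
    pred = forward(x)
    grad_pred = 2.0 * (pred - y) / len(x)  # d(MSE)/d(pred)
    w -= lr * (x.T @ grad_pred)
    b -= lr * grad_pred.sum(axis=0)
```

After training, `w` and `b` should be close to the true values 2 and 1; the same loop structure carries over to deep networks, just with more layers and an automatic-differentiation engine computing the gradients.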
Layers
• Data processing modules
• Many different kinds exist
• densely connected
• convolutional
• recurrent
• pooling, flattening, merging, normalization, etc.
• Input: one or more tensors; output: one or more tensors
• Usually have a state, encoded as weights
• initially random, learned during training
• When combined, form a network or a model
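A minimal sketch of these ideas, assuming a hypothetical `Dense` class written for this example: each layer holds randomly initialized weights as its state, maps an input tensor to an output tensor, and a list of layers forms a model.

```python
import numpy as np

class Dense:
    """A densely connected layer: state = weight matrix + bias vector."""
    def __init__(self, n_in, n_out, rng):
        self.w = rng.normal(scale=0.1, size=(n_in, n_out))  # initially random
        self.b = np.zeros(n_out)

    def __call__(self, x):
        # Input tensor in, output tensor out (ReLU activation)
        return np.maximum(0.0, x @ self.w + self.b)

rng = np.random.default_rng(0)

# Combining layers forms a network (a model)
model = [Dense(4, 8, rng), Dense(8, 2, rng)]

x = rng.normal(size=(3, 4))  # batch of 3 input vectors
for layer in model:
    x = layer(x)
# x now has shape (3, 2): 3 samples, 2 output features
```

Frameworks such as Keras or PyTorch provide the same pattern as built-in layer classes, plus the machinery to learn the weights.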
Neural Networks