Know Thy Frenemy
Understanding LLMs – Past, Present, and Future
Barak Shoshany
Department of Physics, Brock University
Motivation
• Many professors and students use LLMs.
• Other talks focus on implications of LLMs.
• This talk focuses on LLMs themselves.
• Understanding LLMs better will help professors:
• Optimize use.
• Dispel misconceptions.
• Develop situational awareness.
• Incorporate in courses.
• Instruct students on proper use.
Part I: Past
Neural networks 1/5
• Curve fitting: Find a function that
approximates the data.
• Minimize the error.
• Example: Approximate the data by a
polynomial.
• More parameters = better fit.
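The curve-fitting idea above can be sketched in a few lines: fit a straight line (a degree-1 polynomial) to data by minimizing the squared error. The data points are made up for illustration.

```python
# Minimal curve-fitting sketch: fit y = a + b*x to data points by
# minimizing the squared error (ordinary least squares).

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]  # these happen to lie exactly on y = 1 + 2x

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares solution for the slope and intercept.
b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
a = mean_y - b * mean_x

# Total squared error of the fitted line on the data.
error = sum((a + b * x - y) ** 2 for x, y in zip(xs, ys))
print(a, b, error)  # → 1.0 2.0 0.0 (the line fits these points exactly)
```

With more parameters (a higher-degree polynomial) the error on the data can only go down, which is the sense in which "more parameters = better fit."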
Neural networks 2/5
• Neural network: “Extremely
sophisticated curve fitting.”
• Simplified mathematical model of the
brain (≈100 billion neurons).
• Width: # of neurons per layer.
• Depth: # of hidden layers.
• Deep network: multiple hidden
layers.
• Deeper layers = more abstract features.
Neural networks 3/5
• Each connection has a weight
(≈ importance).
• Each neuron computes a weighted
sum of the previous layer's outputs.
• The result is passed through a non-
linear activation function.
• Universal approximation theorem:
Any continuous function can be
approximated arbitrarily well by a
neural network with enough neurons.
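The single-neuron computation described above can be written in a few lines. A minimal sketch: the logistic sigmoid activation and the example numbers are illustrative choices, not taken from the slides.

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: a weighted sum of the previous layer's outputs,
    passed through a nonlinear activation (here, the logistic sigmoid)."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# With all-zero weights the weighted sum is 0, and sigmoid(0) = 0.5.
print(neuron([1.0, 2.0], [0.0, 0.0], 0.0))  # → 0.5
```

The nonlinearity is essential: stacking purely linear layers would collapse into a single linear map, and the universal approximation theorem would not hold.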
Neural networks 4/5
• Example – image to text (simplified):
• Input layer = pixels.
• Early hidden layers = edges, orientation, colors.
• Middle hidden layers = textures, motifs.
• Late hidden layers = objects, scene context.
• Output layer = text description of image.
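The layered pipeline above can be sketched as a toy forward pass: each layer applies the same weighted-sum-plus-activation step to the previous layer's outputs. The 2-2-1 shape and the weights are made up for illustration; a real image-to-text network has millions or billions of learned parameters.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of ALL of the
    previous layer's outputs, then applies the activation."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Toy 2-2-1 network with made-up weights (not a trained vision model).
x = [0.5, -1.0]                                       # input layer (e.g. pixels)
h = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])   # hidden layer (features)
y = layer(h, [[2.0, -2.0]], [0.0])                    # output layer
print(y)
```

Each successive layer recombines the previous layer's outputs, which is how deeper layers can represent progressively more abstract features (edges, then textures, then objects).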
Any questions?