Vectors in AI
Student's name: Rawan Nawfal Hashim
Dept.: P&O Eng.
A vector in AI typically refers to an ordered array of numbers, often representing a point or direction in multi-dimensional space (a small example follows the list below).
Challenges of vector representations:
● Interpretability: Vector representations, especially learned ones, are often hard to interpret.
● Bias: Vectors can encode and perpetuate biases present in the data (especially in NLP embeddings).
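A minimal C++ sketch of this idea, assuming a toy 3-dimensional feature vector; the dimension meanings and the values are illustrative, not taken from the slides:

#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    // A toy 3-dimensional feature vector: each index has a fixed, assumed meaning,
    // e.g. index 0 = height in cm, index 1 = weight in kg, index 2 = age in years.
    std::vector<double> features = {170.0, 65.5, 24.0};

    // The order matters: the i-th number is the coordinate along the i-th
    // axis of the feature space.
    for (std::size_t i = 0; i < features.size(); ++i) {
        std::cout << "dimension " << i << " = " << features[i] << '\n';
    }
    return 0;
}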
🔹 1. Dot Product (Inner Product)
● Formula: a · b = Σᵢ aᵢ bᵢ
● Use in AI:
● Measures similarity between vectors (e.g. cosine similarity)
● Used in neural networks (e.g. neuron activation); see the sketch below
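A minimal C++ sketch of the dot product; the helper name dot and the example weight/input values are illustrative assumptions. The same sum of element-wise products is what a single neuron computes over its inputs and weights:

#include <cassert>
#include <cstddef>
#include <iostream>
#include <vector>

// Dot product: the sum of element-wise products of two equal-length vectors.
double dot(const std::vector<double>& a, const std::vector<double>& b) {
    assert(a.size() == b.size());
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        sum += a[i] * b[i];
    }
    return sum;
}

int main() {
    // Toy example: one neuron's weights and one input vector.
    std::vector<double> weights = {0.2, -0.5, 1.0};
    std::vector<double> input   = {1.0,  2.0, 3.0};

    // The neuron's pre-activation value is the dot product w · x.
    std::cout << "w . x = " << dot(weights, input) << '\n';  // 0.2 - 1.0 + 3.0 = 2.2
    return 0;
}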
🔹 2. Vector Norms (Magnitude)
● Formulas:
● L2 Norm (Euclidean): ‖x‖₂ = √(Σᵢ xᵢ²)
● L1 Norm (Manhattan): ‖x‖₁ = Σᵢ |xᵢ|
● Use in AI:
● Regularization to avoid overfitting
● Normalizing input vectors (see the sketch below)
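A minimal C++ sketch of the L2 and L1 norms and of normalizing a vector to unit length; the helper names and the example vector are illustrative assumptions:

#include <cmath>
#include <iostream>
#include <vector>

// L2 (Euclidean) norm: square root of the sum of squared components.
double l2_norm(const std::vector<double>& x) {
    double sum = 0.0;
    for (double v : x) sum += v * v;
    return std::sqrt(sum);
}

// L1 (Manhattan) norm: sum of the absolute values of the components.
double l1_norm(const std::vector<double>& x) {
    double sum = 0.0;
    for (double v : x) sum += std::fabs(v);
    return sum;
}

int main() {
    std::vector<double> x = {3.0, -4.0};
    std::cout << "L2 norm = " << l2_norm(x) << '\n';  // 5
    std::cout << "L1 norm = " << l1_norm(x) << '\n';  // 7

    // Normalizing an input vector: divide every component by the L2 norm
    // so that the result has length 1.
    double n = l2_norm(x);
    for (double& v : x) v /= n;
    std::cout << "norm after normalization = " << l2_norm(x) << '\n';  // 1
    return 0;
}

The same sums, Σᵢ |xᵢ| and Σᵢ xᵢ², are what L1 and L2 regularization add to a loss function to discourage large weights.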
🔹 3. Cosine Similarity
● Formula: cos(θ) = (a · b) / (‖a‖₂ ‖b‖₂)
● Use in AI:
● Text similarity, image retrieval, clustering
● Works well with sparse, high-dimensional data (see the sketch below)
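A minimal C++ sketch of cosine similarity, assuming toy term-count vectors for two short documents; the helper names and the counts are illustrative:

#include <cassert>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

double dot(const std::vector<double>& a, const std::vector<double>& b) {
    assert(a.size() == b.size());
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

double l2_norm(const std::vector<double>& x) {
    return std::sqrt(dot(x, x));
}

// Cosine similarity: the dot product divided by the product of the L2 norms.
// It depends only on the angle between the vectors, not on their lengths,
// which is why it works well for sparse, high-dimensional data such as text.
double cosine_similarity(const std::vector<double>& a, const std::vector<double>& b) {
    return dot(a, b) / (l2_norm(a) * l2_norm(b));
}

int main() {
    // Toy term-count vectors for two short documents (one dimension per word).
    std::vector<double> doc1 = {2.0, 1.0, 0.0, 1.0};
    std::vector<double> doc2 = {1.0, 1.0, 0.0, 0.0};
    std::cout << "cosine similarity = " << cosine_similarity(doc1, doc2) << '\n';
    return 0;
}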
🔹 4. Matrix-Vector Multiplication
● Formula: y = Wx, where yᵢ = Σⱼ Wᵢⱼ xⱼ
● Use in AI:
● Core operation in forward propagation in neural networks
● Transforms data into different feature spaces (see the sketch below)
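A minimal C++ sketch of y = Wx as the forward pass of one fully connected layer; the matvec helper, the 2×3 weight matrix, and the input values are illustrative assumptions, and the bias and activation function are omitted for brevity:

#include <cassert>
#include <cstddef>
#include <iostream>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// y = W x : every output component y_i is the dot product of row i of W with x.
std::vector<double> matvec(const Matrix& W, const std::vector<double>& x) {
    std::vector<double> y(W.size(), 0.0);
    for (std::size_t i = 0; i < W.size(); ++i) {
        assert(W[i].size() == x.size());
        for (std::size_t j = 0; j < x.size(); ++j) {
            y[i] += W[i][j] * x[j];
        }
    }
    return y;
}

int main() {
    // Toy fully connected layer: 3 inputs mapped to 2 outputs.
    Matrix W = {{0.1, 0.2, 0.3},
                {0.4, 0.5, 0.6}};
    std::vector<double> x = {1.0, 2.0, 3.0};

    // One forward-propagation step (bias and activation function omitted).
    std::vector<double> y = matvec(W, x);
    for (double v : y) std::cout << v << ' ';  // 1.4 3.2
    std::cout << '\n';
    return 0;
}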
🔹 5. Gradient (Vector of Partial Derivatives)
● Formula: ∇f(x) = (∂f/∂x₁, ∂f/∂x₂, …, ∂f/∂xₙ)
● Use in AI:
● Optimization of the loss function via gradient descent
● Used in backpropagation to update the weights of a neural network (see the sketch below)
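A minimal C++ sketch of gradient descent on an assumed toy quadratic loss f(x) = Σᵢ (xᵢ − tᵢ)²; the loss, the target t, and the learning rate are illustrative choices, not from the slides:

#include <cstddef>
#include <iostream>
#include <vector>

// Assumed toy loss: f(x) = sum over i of (x_i - t_i)^2 for a fixed target vector t.
// Its gradient is the vector of partial derivatives df/dx_i = 2 * (x_i - t_i).
std::vector<double> gradient(const std::vector<double>& x, const std::vector<double>& t) {
    std::vector<double> g(x.size());
    for (std::size_t i = 0; i < x.size(); ++i) g[i] = 2.0 * (x[i] - t[i]);
    return g;
}

int main() {
    std::vector<double> x = {5.0, -3.0};   // starting point
    std::vector<double> t = {1.0,  2.0};   // the loss is smallest at x = t
    double learning_rate = 0.1;

    // Gradient descent: repeatedly step against the gradient to reduce the loss.
    for (int step = 0; step < 50; ++step) {
        std::vector<double> g = gradient(x, t);
        for (std::size_t i = 0; i < x.size(); ++i) x[i] -= learning_rate * g[i];
    }

    std::cout << "x after descent: " << x[0] << ' ' << x[1] << '\n';  // close to 1 and 2
    return 0;
}

Backpropagation computes this kind of gradient for every weight in a network, and the update has the same form: weight −= learning rate × gradient.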
💾 Vectors as Dynamic Arrays (in data structures)
● In languages like C++, a vector is a dynamic array: a resizable array provided by the Standard Template Library (STL).
● Efficient for:
● Index-based access (see the sketch below)
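A minimal C++ sketch using std::vector from the STL; the element values are illustrative:

#include <iostream>
#include <vector>

int main() {
    std::vector<int> v;  // an empty dynamic array

    // push_back grows the array automatically; the STL manages the capacity.
    for (int i = 1; i <= 5; ++i) {
        v.push_back(i * 10);
    }

    // Index-based access is constant time, just like a built-in array.
    std::cout << "v[2] = " << v[2] << '\n';                // 30
    std::cout << "size = " << v.size() << '\n';            // 5
    std::cout << "capacity >= size: " << std::boolalpha
              << (v.capacity() >= v.size()) << '\n';       // true
    return 0;
}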
Conclusion / Summary
● In AI, a vector is an ordered array of numbers that represents data as a point or direction in a multi-dimensional space.
● A few core vector operations (dot product, norms, cosine similarity, matrix-vector multiplication, and gradients) power similarity measures, forward propagation in neural networks, regularization, and training by gradient descent.
● In programming, a vector (such as C++'s std::vector from the STL) is a dynamic, resizable array with efficient index-based access.