ML Module 5

I have my exam for the course Machine Learning.

Good luck with your Machine Learning exam! Let me know if you need help reviewing any topics,
concepts, or practice problems. 😊

Help me prepare.

Sure! Let's break down your preparation into manageable steps. Could you share the specific topics or
chapters your Machine Learning course covers? Here's a general approach to get started:

1. Core Concepts Review

Supervised Learning: Linear Regression, Logistic Regression, Decision Trees, Support Vector
Machines, k-NN.
Unsupervised Learning: Clustering (k-Means, Hierarchical), Dimensionality Reduction (PCA).
Reinforcement Learning: Basics of Markov Decision Processes (MDP), Q-Learning.
Evaluation Metrics: Accuracy, Precision, Recall, F1 Score, ROC-AUC.

2. Mathematical Foundations

Linear Algebra: Matrices, Eigenvalues, Eigenvectors (for PCA).


Calculus: Gradients, Partial Derivatives (Gradient Descent).
Probability and Statistics: Bayes' Theorem, Probability Distributions, Hypothesis Testing.

3. Algorithms and Applications

Common algorithms like Naive Bayes, Neural Networks (Basics), Ensemble Methods (Bagging,
Boosting).
Real-world applications such as spam filtering, recommendation systems, etc.

4. Hands-On Practice

Write and test code snippets for ML models using Python libraries like `sklearn`, `TensorFlow`, or
`PyTorch`.
Work on datasets (e.g., Iris, Titanic, MNIST).

5. Revision and Testing

Focus on past exam questions or assignments.


Review important topics with flashcards.
Test yourself with sample problems.

Would you like detailed explanations for specific topics, help with coding examples, or practice
questions?

Explain Density Based Clustering DBSCAN algorithm. Explain in detail with examples but in simple
language.

DBSCAN (Density-Based Spatial Clustering of Applications with Noise)

DBSCAN is a clustering algorithm that groups points into clusters based on their density in the data
space. Unlike k-Means, it does not require the number of clusters to be specified beforehand and is
capable of identifying clusters of arbitrary shapes. It can also identify noise points (outliers).

Key Concepts in DBSCAN

1. Core Points
A point is a core point if at least `minPts` points (counting the point itself) lie within a radius `eps` of it.
This means the point is in a dense region.
2. Border Points
A point is a border point if it is not a core point but lies within `eps` of a core point.
It belongs to a cluster but does not have enough neighboring points to be a core point itself.
3. Noise Points
A point is considered noise if it is neither a core point nor a border point.
It does not belong to any cluster.
4. Parameters
`eps`: The radius within which points are considered neighbors.
`minPts`: The minimum number of points required to form a dense region.

Steps in DBSCAN Algorithm

1. Select a Point: Start with an arbitrary point in the dataset.
If the point has not been visited yet, check whether it is a core point (i.e., it has at least `minPts` points within `eps`); a small code sketch of this check follows these steps.
2. Form a Cluster:
If it is a core point, form a new cluster by adding all its density-reachable points (points
reachable from the core point via other core points).
3. Expand the Cluster:

For each new point added to the cluster, repeat the process to include its density-reachable
neighbors.
4. Mark Noise:
If a point is neither a core point nor density-reachable from any cluster, mark it as noise.
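The core-point check in these steps can be sketched in a few lines of NumPy. This is only a simplified illustration of the neighborhood test (not the full cluster-expansion logic), using the same points and parameters as the worked example below:

python

import numpy as np

# The six points used in the worked example below
X = np.array([[1, 2], [2, 2], [2, 3], [8, 7], [8, 8], [25, 80]])
eps, min_pts = 2, 3

# Pairwise Euclidean distances between all points
dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

# A point is a core point if at least min_pts points
# (counting the point itself) lie within eps of it
neighbor_counts = (dists <= eps).sum(axis=1)
core_mask = neighbor_counts >= min_pts

print("Neighbor counts:", neighbor_counts)             # [3 3 3 2 2 1]
print("Core point indices:", np.where(core_mask)[0])   # [0 1 2]
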

Example: Understanding DBSCAN

Dataset Example

Consider the following points in a 2D space:

plaintext

P1(1, 2), P2(2, 2), P3(2, 3), P4(8, 7), P5(8, 8), P6(25, 80)

Parameters

`eps = 2`: A point's neighborhood is within a radius of 2 units.


`minPts = 3`: At least 3 points are required to form a dense region.

Process

1. Start with P1:


Neighborhood of P1: {P1, P2, P3} (within `eps = 2`).
Since the neighborhood contains 3 points (≥ `minPts`), P1 is a core point.
Form a cluster: {P1, P2, P3}.
2. Expand Cluster:
Check neighbors of P2 and P3. They do not add any new points to the cluster.
3. Move to P4:
Neighborhood of P4: {P4, P5} (within `eps = 2`).
Since the neighborhood has only 2 points (< `minPts`), P4 is not a core point. It is also not within `eps` of any core point, so P4 is marked as noise (the same applies to P5).
4. Move to P6:
Neighborhood of P6: {P6} (no points within `eps = 2`).
P6 is a noise point.

Final Result

Clusters:
Cluster 1: {P1, P2, P3}.
Noise: {P4, P5, P6}. With `minPts = 3`, P4 and P5 are not core points and are not within `eps` of any core point, so they are labeled noise (they would form a second cluster only if `minPts` were lowered to 2).

Advantages of DBSCAN

1. No Need for Predefined Number of Clusters: Unlike k-Means, DBSCAN automatically determines
the number of clusters based on density.

2. Handles Arbitrary Shapes: Can form clusters of different shapes (e.g., circular, elongated).
3. Detects Noise: Labels outliers as noise points.

Limitations of DBSCAN

1. Choosing Parameters: The results heavily depend on `eps` and `minPts`. Choosing poor values
can lead to bad clustering.
2. Sensitive to Density Variations: Struggles with datasets where clusters have different densities.

Python Code Example

Here’s a Python implementation using `sklearn`:

python

from sklearn.cluster import DBSCAN
import numpy as np

# Example dataset
X = np.array([[1, 2], [2, 2], [2, 3], [8, 7], [8, 8], [25, 80]])

# DBSCAN model
dbscan = DBSCAN(eps=2, min_samples=3)
labels = dbscan.fit_predict(X)

# Results
print("Cluster labels:", labels)

`labels` output might look like:

plaintext

[ 0  0  0 -1 -1 -1]

Here, `-1` indicates a noise point. With `eps=2` and `min_samples=3`, only {P1, P2, P3} form a cluster; P4, P5, and P6 are all labeled noise.
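As a small follow-up to the snippet above, the fitted model also exposes which points it treated as core points, which ties back to the core/border/noise distinction:

python

# Indices of the core points found by DBSCAN (here: the first three points)
print("Core point indices:", dbscan.core_sample_indices_)
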

Does this explanation clarify DBSCAN for you? 😊

Explain clustering with overview of distance metrics and major clustering approaches. Explain in
detail with examples but in simple language.

Clustering Overview

Clustering is a type of unsupervised machine learning technique where data points are grouped into
clusters (or groups) based on some similarity measure. The goal is to organize data into groups such
that points in the same cluster are more similar to each other than to those in other clusters. Clustering
is widely used in applications like market segmentation, image compression, and pattern recognition.
Types of Clustering Approaches

1. Partitioning Clustering

The data is divided into distinct, non-overlapping clusters.


Example: k-Means, where data is divided into `k` clusters.

2. Hierarchical Clustering

Builds a tree-like structure (called a dendrogram) to represent the data hierarchy.


Can be agglomerative (bottom-up) or divisive (top-down).
Example: Agglomerative Hierarchical Clustering.

3. Density-Based Clustering

Groups points that are closely packed together, marking points in low-density regions as noise.
Example: DBSCAN (Density-Based Spatial Clustering of Applications with Noise).

4. Grid-Based Clustering

Divides the data space into a grid and groups data based on the grid structure.
Example: STING (Statistical Information Grid).

5. Model-Based Clustering

Assumes that the data is generated from a mixture of probability distributions and tries to find the
best model to explain the data.
Example: Gaussian Mixture Models (GMM).

Distance Metrics Used in Clustering

Clustering relies on distance metrics to measure the similarity between data points. Here are some
common distance metrics:

1. Euclidean Distance

Formula:

d = √((x1 − x2)² + (y1 − y2)²)

Description: Measures the straight-line distance between two points in a multi-dimensional space.
It's the most commonly used distance metric.
Example:
For points P1(1, 2) and P2(4, 6),

d = √((1 − 4)² + (2 − 6)²) = √(9 + 16) = √25 = 5

2. Manhattan Distance (City Block)

Formula:
d = |x1 − x2| + |y1 − y2|

Description: Measures the distance between two points if only vertical and horizontal moves are
allowed (like navigating through city blocks).
Example:
For points P1(1, 2) and P2(4, 6),

d = ∣1 − 4∣ + ∣2 − 6∣ = 3 + 4 = 7

3. Cosine Similarity

Formula:

cosine similarity = (A · B) / (‖A‖ ‖B‖)

where `A` and `B` are vectors of data points.

Description: Measures the cosine of the angle between two vectors. A value close to 1 means they
are very similar.
Example:
If A = [1, 0] and B = [0, 1],

cosine similarity = ((1 × 0) + (0 × 1)) / (√(1² + 0²) × √(0² + 1²)) = 0 / 1 = 0

4. Hamming Distance

Formula:

d = count of differing positions in strings or binary sequences

Description: Measures the number of differing characters between two strings or binary
sequences.
Example:
For binary strings `10101` and `10011`,

d = 2 (positions 3 and 4 are different)
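All four metrics are available in SciPy; the short sketch below reproduces the worked values above (Euclidean 5, Manhattan 7, cosine similarity 0, two differing Hamming positions). The variable names are just for illustration:

python

import numpy as np
from scipy.spatial import distance

p1, p2 = np.array([1, 2]), np.array([4, 6])
a, b = np.array([1, 0]), np.array([0, 1])
s1, s2 = list("10101"), list("10011")

print(distance.euclidean(p1, p2))           # 5.0
print(distance.cityblock(p1, p2))           # 7  (Manhattan / city block)
print(1 - distance.cosine(a, b))            # 0.0 (cosine similarity)
# SciPy's hamming() returns the fraction of differing positions
print(distance.hamming(s1, s2) * len(s1))   # 2.0
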

Major Clustering Algorithms

1. k-Means Clustering

Approach: Divides data into `k` clusters by minimizing the within-cluster variance. Each cluster is
represented by its centroid (mean).
Steps:
1. Choose `k` initial centroids randomly.
2. Assign each point to the nearest centroid.
3. Recalculate centroids as the mean of the points in each cluster.
4. Repeat steps 2 and 3 until convergence.
Example:
Let's say we have 6 points:

P1(1, 2), P2(2, 3), P3(3, 3), P4(8, 8), P5(9, 9), P6(10, 10)

After running k-Means with `k=2`, the algorithm would cluster the points into two groups: one
containing points close to (1, 2) and the other close to (8, 8).
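One round of the assign-and-update steps above can be written out directly. This is a simplified sketch with hand-picked initial centroids (an assumption made for illustration), not a full k-Means implementation:

python

import numpy as np

# The six points from the example above
X = np.array([[1, 2], [2, 3], [3, 3], [8, 8], [9, 9], [10, 10]])

# Step 1: choose two initial centroids (picked by hand here)
centroids = np.array([[1.0, 2.0], [10.0, 10.0]])

for _ in range(10):  # a few iterations are enough for this tiny dataset
    # Step 2: assign each point to its nearest centroid
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    labels = dists.argmin(axis=1)
    # Step 3: recompute each centroid as the mean of its assigned points
    centroids = np.array([X[labels == k].mean(axis=0) for k in range(2)])

print("Labels:", labels)        # [0 0 0 1 1 1]
print("Centroids:", centroids)  # roughly (2, 2.67) and (9, 9)
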

2. DBSCAN (Density-Based Spatial Clustering of Applications with Noise)

Approach: Groups points that are closely packed together and marks outliers as noise. Does not
require `k` to be specified.
Key Parameters:
`eps`: Radius within which points are considered neighbors.
`minPts`: Minimum number of points required to form a dense region (core point).
Example:
For a dataset with points like (1, 2), (2, 2), (2, 3), and others, DBSCAN will form clusters based on the
density of nearby points and mark any points far from dense regions as noise.

3. Hierarchical Clustering

Approach: Builds a hierarchy of clusters either by starting with individual points (agglomerative) or
one big cluster (divisive).
Agglomerative (Bottom-Up): Starts with each point as its own cluster and merges the closest
clusters until only one remains.
Divisive (Top-Down): Starts with all points in a single cluster and splits them into smaller clusters.
Example:
Given points (1, 2), (2, 2), (3, 3), and (8, 8), agglomerative hierarchical clustering will start by
merging the closest points and continue until it has formed a complete cluster tree.
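That merge process can be sketched with SciPy's `linkage` function. The choice of single linkage here is an assumption made for illustration; other linkage criteria (complete, average, Ward) would work similarly:

python

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.array([[1, 2], [2, 2], [3, 3], [8, 8]])

# Build the merge tree (dendrogram) bottom-up with single linkage
Z = linkage(X, method='single')
print(Z)  # each row records the two clusters merged and their distance

# Cut the tree into 2 flat clusters
labels = fcluster(Z, t=2, criterion='maxclust')
print("Cluster labels:", labels)  # e.g. [1 1 1 2]
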

4. Gaussian Mixture Models (GMM)

Approach: Assumes that the data is generated from a mixture of multiple Gaussian distributions.
Each cluster is represented by a Gaussian distribution, and data points are assigned to clusters
based on their probability of belonging to each Gaussian.
Steps:
1. Estimate the parameters (mean and variance) of the Gaussians.
2. Assign points to clusters based on these parameters.
3. Iterate to refine the estimates of the parameters and assignments.
Example:
For a dataset where data points come from two Gaussian distributions (one centered at (2, 3) and
another at (8, 8)), GMM would identify and cluster the points based on these distributions.
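A quick sketch of that idea is shown below. The data here is synthetic, generated around the two centers mentioned above, so the exact numbers are only illustrative:

python

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic points around (2, 3) and (8, 8)
X = np.vstack([
    rng.normal(loc=[2, 3], scale=0.5, size=(20, 2)),
    rng.normal(loc=[8, 8], scale=0.5, size=(20, 2)),
])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print("Estimated means:\n", gmm.means_)  # close to (2, 3) and (8, 8)
print("Membership probabilities of the first point:", gmm.predict_proba(X[:1]))
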

Example of Clustering with Python (k-Means)

python

from sklearn.cluster import KMeans
import numpy as np

# Example dataset (the six points from the k-Means example above)
X = np.array([[1, 2], [2, 3], [3, 3], [8, 8], [9, 9], [10, 10]])

# Create a KMeans model with 2 clusters
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
kmeans.fit(X)

# Predicted cluster labels for each point
print("Cluster labels:", kmeans.labels_)

Output (the 0/1 label numbering may be swapped, depending on initialization):

plaintext

Cluster labels: [0 0 0 1 1 1]

In this example, k-Means has grouped the points into two clusters: (1, 2), (2, 3), (3, 3) belong to one cluster (labeled `0`), and (8, 8), (9, 9), (10, 10) belong to the other (labeled `1`).

Conclusion

Clustering helps in finding patterns and grouping similar data. Different clustering algorithms are suited
for different types of data and requirements. Understanding how to choose the right algorithm based
on the data's nature and the clustering objective is crucial.

Would you like to dive deeper into any of these clustering approaches or metrics?

Explain Graph Based clustering: Clustering with minimum spanning tree. Explain in detail with
examples but in simple language.

Graph-Based Clustering: Clustering with Minimum Spanning Tree (MST)

Graph-based clustering approaches focus on treating data as a graph where data points (or objects) are
represented as nodes, and the edges between nodes reflect some similarity or distance between them.
The objective is to find natural groupings or clusters by using graph-theoretic techniques.

One of the popular methods within graph-based clustering is clustering with a Minimum Spanning
Tree (MST). Let's break it down step by step:

What is a Minimum Spanning Tree (MST)?

A Minimum Spanning Tree is a subgraph that connects all the nodes in a graph with the least possible
total edge weight, such that no cycles are formed. In simpler terms:

Nodes represent data points.


Edges represent relationships between the data points (with weights corresponding to the
distance or dissimilarity between points).
Spanning Tree: A tree that connects all nodes.
Minimum: The tree that minimizes the total edge weights (usually the smallest possible distances
between nodes).

The MST ensures that you are connecting all points with the smallest distances, making it useful for
finding natural groupings in data.

Steps for Clustering with Minimum Spanning Tree

Here's how graph-based clustering using an MST works:

1. Construct a Graph:
Treat each data point as a node in the graph.
Calculate the similarity or distance between each pair of data points and use it to define the
weight of the edge between them.
This can be done using Euclidean Distance, Cosine Similarity, or other distance metrics,
depending on the data.
2. Build the Minimum Spanning Tree (MST):
Use an algorithm like Kruskal’s Algorithm or Prim’s Algorithm to find the MST.
The MST connects all nodes (data points) with the minimum total edge weight.
3. Identify Clusters by Cutting the Tree:
Once the MST is constructed, you can "cut" the tree at edges that have the largest weight.
This cut splits the tree into smaller components (clusters).
The idea is that if you remove the edges with the highest weight, you create more tightly
connected groups (clusters).
The number of clusters can be determined by how many cuts you make or by a threshold on
the edge weight.
4. Cluster Assignment:
Each connected component after the cut represents a cluster.
All data points in the same connected component (subtree) are considered to belong to the
same cluster.

Example of Graph-Based Clustering with MST

Let’s go through a simple example with some data points.

Data Points: Imagine we have 5 points in a 2D space:

A(1, 1)
B(2, 2)
C(3, 3)
D(8, 8)
E(9, 9)

We will calculate the pairwise distances between these points (using Euclidean Distance) and build an
MST.

Step 1: Calculate Pairwise Distances

We calculate the Euclidean distance between each pair of points:

d(A, B) = √((1 − 2)² + (1 − 2)²) = √(1 + 1) = √2 ≈ 1.41
d(A, C) = √((1 − 3)² + (1 − 3)²) = √(4 + 4) = √8 ≈ 2.83
d(A, D) = √((1 − 8)² + (1 − 8)²) = √(49 + 49) = √98 ≈ 9.90

... and so on for all pairs.
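Rather than computing each distance by hand, the whole pairwise distance matrix can be obtained in one call (a quick check of the values above):

python

import numpy as np
from scipy.spatial.distance import pdist, squareform

points = np.array([[1, 1], [2, 2], [3, 3], [8, 8], [9, 9]])  # A, B, C, D, E
dist_matrix = squareform(pdist(points, 'euclidean'))

print(np.round(dist_matrix, 2))
# Row A: [ 0.    1.41  2.83  9.9  11.31], matching d(A,B), d(A,C), d(A,D) above
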

Step 2: Construct the Graph and MST

We treat each point as a node and create edges based on their distances.
Next, we apply Kruskal’s or Prim’s algorithm to find the MST. The MST connects the points with the
smallest distances, avoiding cycles.

MST Result:

The tree connects the closest points first: A → B → C → D → E.

Step 3: Cut the Tree to Form Clusters

The largest distance (edge weight) in the MST is the edge between points `C` and `D` (distance √50 ≈ 7.07).
To form clusters, we might cut the MST at this edge, splitting the tree into two clusters:
Cluster 1: A, B, C (these points are tightly connected)
Cluster 2: D, E (these points are more distant from the others)

Step 4: Final Clusters

Cluster 1: A, B, C
Cluster 2: D, E

Now we have two clusters based on the MST and the cut at the largest edge weight.

Advantages of Clustering with MST

1. Captures Data Structure:


MST-based clustering often preserves the underlying structure of the data by focusing on the
nearest points and minimizing overall distance.
2. Works Well with Non-Spherical Clusters:
Unlike methods like k-Means, which assume circular (spherical) clusters, MST can handle
arbitrary shapes because it is based on distances, not pre-defined cluster shapes.
3. No Need for Predefined Number of Clusters:
Unlike k-Means where you need to specify the number of clusters `k` in advance, MST-based
clustering lets you determine the number of clusters by cutting the tree.

Disadvantages of Clustering with MST

1. Scalability:
As the number of data points increases, calculating the pairwise distances and constructing
the graph can become computationally expensive.
2. Sensitive to Distance Metrics:

The clustering result depends heavily on the distance metric used. A poor choice of distance
measure can lead to incorrect clustering.
3. Edge-Cutting Threshold:
Choosing the right place to cut the tree can be tricky. If the cut is too small, you may get too
many small clusters; if it's too large, you may not get meaningful groupings.

Python Example: Clustering with Minimum Spanning Tree

Here is a basic Python sketch using `numpy`, `scipy`, and `networkx` for graph-based clustering with MST:

python

import numpy as np
import networkx as nx
from scipy.spatial.distance import pdist, squareform

# Example data points (2D coordinates)
X = np.array([[1, 1], [2, 2], [3, 3], [8, 8], [9, 9]])

# Step 1: Calculate pairwise distances
distance_matrix = squareform(pdist(X, 'euclidean'))

# Step 2: Create a complete graph with one node per point
G = nx.Graph()
n = len(X)
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=distance_matrix[i][j])

# Step 3: Compute the MST using NetworkX's minimum spanning tree
mst = nx.minimum_spanning_tree(G)

# Step 4: Extract clusters by cutting the tree at a threshold:
# remove every MST edge heavier than the threshold, then each
# remaining connected component is one cluster
threshold = 5  # cut the MST at this edge weight
cut_tree = mst.copy()
for u, v, data in list(mst.edges(data=True)):
    if data['weight'] > threshold:
        cut_tree.remove_edge(u, v)

clusters = [sorted(component) for component in nx.connected_components(cut_tree)]

# Final clusters
print("Clusters:", clusters)  # expected: [[0, 1, 2], [3, 4]]

Conclusion

Graph-based clustering using Minimum Spanning Tree (MST) is a powerful method, particularly for
capturing the underlying structure of data. By treating data points as a graph, constructing an MST, and
cutting it based on edge weights, we can find natural groupings without making assumptions about the shape or number of clusters. However, the approach requires careful attention to the distance metric
and the method of cutting the tree to get meaningful results.
Would you like to dive deeper into MST algorithms, or explore another clustering technique?

Explain Model Based Clustering: Expectation Maximisation Algorithm. Explain in detail with
examples but in simple language.

Model-Based Clustering: Expectation-Maximization (EM) Algorithm

Model-based clustering assumes that data points are generated from a mixture of probability
distributions, where each distribution represents a cluster. The most common example involves
Gaussian Mixture Models (GMM), where each cluster is modeled as a Gaussian distribution.

The Expectation-Maximization (EM) Algorithm is a technique used to estimate the parameters of these distributions and assign data points to clusters probabilistically.

Key Ideas of Model-Based Clustering

1. Probabilistic Models:
Each cluster is represented as a probability distribution (e.g., Gaussian, Bernoulli, etc.).
For simplicity, we focus on Gaussian Mixture Models (GMM), where clusters are multivariate
Gaussian distributions.
2. Cluster Membership:
Instead of hard assignments (e.g., k-Means), each data point has a probability of belonging to
each cluster.
3. EM Algorithm:
Iteratively estimates the parameters of the model (e.g., means and variances of Gaussians)
and computes the probabilities of cluster memberships.

Steps of the EM Algorithm

1. Initialize Parameters

Start by initializing the parameters of the Gaussian distributions:

Means (μk) of each cluster.
Variances (Σk) or covariances for multivariate data.
Mixing coefficients (πk), which represent the proportion of points in each cluster.

For example, if we want to identify 2 clusters:

Initialize μ1 and μ2 (cluster centers).
Assign initial variances (σ1², σ2²).
Set π1 = π2 = 0.5 (equal probability for both clusters).

2. Expectation Step (E-Step)

In this step, we compute the probability that each data point belongs to each cluster. This is done using
Bayes’ Theorem:

P(Z = k | X = xi) = [πk · f(xi | μk, Σk)] / [Σ(j=1 to K) πj · f(xi | μj, Σj)]

Where:

P(Z = k | X = xi): Probability that data point xi belongs to cluster k.
f(xi | μk, Σk): Probability density function (PDF) of the Gaussian for cluster k.
πk: Mixing coefficient for cluster k.


This step assigns "soft" cluster memberships for each point.

3. Maximization Step (M-Step)

Using the probabilities computed in the E-step, we update the parameters of the model:

1. Update Means (μk):

μk = [Σ(i=1 to N) P(Z = k | X = xi) · xi] / [Σ(i=1 to N) P(Z = k | X = xi)]

This is the weighted mean of the points assigned to cluster k.

2. Update Variances (Σk):

Σk = [Σ(i=1 to N) P(Z = k | X = xi) · (xi − μk)²] / [Σ(i=1 to N) P(Z = k | X = xi)]

This is the weighted variance.

3. Update Mixing Coefficients (πk):

πk = [Σ(i=1 to N) P(Z = k | X = xi)] / N

This is the proportion of points belonging to cluster k.

4. Repeat Until Convergence

Iteratively repeat the E-step and M-step until the parameters stabilize (e.g., the change in parameters is
below a threshold).
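Before moving to the worked example, here is a bare-bones sketch of these E- and M-step updates for a small 1D dataset (the same data and initial values used in the example below: μ1 = 2, μ2 = 11, σ1² = σ2² = 1, π1 = π2 = 0.5). It is only an illustration of the update equations, not a production implementation:

python

import numpy as np
from scipy.stats import norm

x = np.array([1, 2, 3, 10, 11, 12], dtype=float)

# Initial parameters for two 1D Gaussians
mu = np.array([2.0, 11.0])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])

for _ in range(20):  # repeat E- and M-steps until (practically) converged
    # E-step: responsibilities P(Z = k | X = x_i)
    pdf = np.stack([norm.pdf(x, mu[k], np.sqrt(var[k])) for k in range(2)], axis=1)
    resp = pi * pdf
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: update means, variances and mixing coefficients
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(x)

print("Means:", mu)           # roughly [2, 11]
print("Variances:", var)      # roughly [0.67, 0.67]
print("Mixing weights:", pi)  # roughly [0.5, 0.5]
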

Example

Let’s cluster a 1D dataset into two clusters using a Gaussian Mixture Model.
Data: [1, 2, 3, 10, 11, 12]

Step 1: Initialize Parameters

μ1 = 2, μ2 = 11 (initial means for two clusters).
σ1² = σ2² = 1 (initial variances).
π1 = π2 = 0.5 (equal cluster proportions).

Step 2: E-Step

Compute probabilities for each point belonging to each cluster using the Gaussian PDF.

For x = 1:

P(Z = 1 | X = 1) = [π1 · f(1 | μ1 = 2, σ1² = 1)] / [π1 · f(1 | μ1, σ1²) + π2 · f(1 | μ2, σ2²)]

Similarly, compute P(Z = 2 | X = 1).

Step 3: M-Step

Update the parameters (μ1, μ2, σ1², σ2², π1, π2) using the updated probabilities from the E-step.

Advantages of Model-Based Clustering

1. Flexibility:
Can model clusters of different shapes, sizes, and densities by choosing appropriate
probability distributions.
2. Probabilistic Assignments:
Handles overlapping clusters by providing soft assignments.
3. Well-Suited for Real-World Problems:
Useful in applications like customer segmentation, speech recognition, and image
processing.

Disadvantages

1. Computational Complexity:
EM can be slow for large datasets or high-dimensional data.
2. Initialization Sensitivity:
Poor initialization of parameters can lead to suboptimal results.
3. Assumes Distribution Form:
Performance depends on correctly assuming the type of distribution (e.g., Gaussian).

Python Example of EM for Gaussian Mixture Model

python

from sklearn.mixture import GaussianMixture
import numpy as np

# Example data
X = np.array([[1], [2], [3], [10], [11], [12]])

# Create and fit the GMM
gmm = GaussianMixture(n_components=2, random_state=42)
gmm.fit(X)

# Predict cluster memberships
clusters = gmm.predict(X)
probs = gmm.predict_proba(X)

print("Cluster Assignments:", clusters)
print("Probabilities:", probs)
print("Means:", gmm.means_)
print("Variances:", gmm.covariances_)

Conclusion

The EM algorithm for model-based clustering (e.g., GMMs) is a powerful tool for finding clusters in data,
especially when clusters overlap or have irregular shapes. It provides probabilistic memberships and is
widely used in real-world problems where data naturally fits probabilistic models.
