
Support Vector Machine (SVM Classifier) Implementation in Python with Scikit-Learn

January 25, 2017 | Saimadhu Polamuri | Machine Learning, Python

SVM classifier implementation in Python with scikit-learn
The support vector machine classifier is one of the most popular machine learning classification algorithms. SVM classifiers are frequently used to address multi-class classification problems. If you are not familiar with multi-class classification, the examples below illustrate it.

Multi-class classification problem examples:

- Given fruit features like color, size, taste, weight, and shape, predicting the fruit type.
- By analyzing the skin, predicting a skin disease.
- Given Google News articles, predicting the topic of the article. This could be sports, movies, tech news, etc.

In short: a multi-class classification problem has more than 2 target classes to predict.

In the first example, predicting the fruit type, the target class contains many fruits, like apple, mango, orange, banana, etc. The same holds for the other two examples: in the news article problem, the target class contains different topics, like sports, movies, tech news, etc.

In this article, we are going to implement the SVM classifier with different kernels. We have already explained the key aspects of the support vector machine algorithm, and we implemented an SVM classifier in the R programming language in our earlier posts. If you are reading this post for the first time, it's recommended to check out the previous post on SVM concepts.

To implement the SVM classifier in Python, we are going to use one of the most popular classification datasets, the Iris dataset. Let's quickly look at the features and the target variable details of this famous classification dataset.

Iris Dataset description

This famous classification dataset was first used in Fisher's classic 1936 paper, The Use of Multiple Measurements in Taxonomic Problems. The Iris dataset has 4 features of the iris flower and one target class.

The 4 features are:

- SepalLengthCm
- SepalWidthCm
- PetalLengthCm
- PetalWidthCm

The target class

The flower species type is the target class, and it has 3 types:

- Setosa
- Versicolor
- Virginica

The idea of implementing the SVM classifier in Python is to use the iris features to train an SVM classifier and then use the trained SVM model to predict the iris species type. To begin, let's load the Iris dataset. We are going to use the iris data from the Scikit-Learn package.

Analyzing Iris dataset


To successfully run the scripts below on your machine, you need to install the required packages. It's best to go through the Python machine learning packages installation guide, or the machine learning packages setup post, before running the scripts below.

Importing Iris dataset from Scikit-Learn

Let's first import the required Python packages

# Required packages
from sklearn import datasets  # to get the iris dataset
from sklearn import svm  # to fit the SVM classifier
import numpy as np
import matplotlib.pyplot as plt  # to visualize the data

Now let’s import the iris dataset

# import iris data to model the SVM classifier
iris_dataset = datasets.load_iris()

Using the DESCR key on iris_dataset, we can get a description of the dataset

1 print "Iris data set Description :: ", iris_dataset['DESCR']

Output

Iris data set Description ::  Iris Plants Database

Notes
-----
Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris

The famous Iris database, first used by Sir R.A. Fisher

This is perhaps the best known database to be found in the
pattern recognition literature. Fisher's paper is a classic in the field and
is referenced frequently to this day. (See Duda & Hart, for example.) The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant. One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

References
----------
   - Fisher, R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda, R.O., & Hart, P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons. ISBN 0-471-22361-1. See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments". IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule". IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64. Cheeseman et al's AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...

Now let’s get the iris features and the target classes

1 print "Iris feature data :: ", iris_dataset['data']

Output

Iris feature data ::  [[ 5.1 3.5 1.4 0.2]
 [ 4.9 3. 1.4 0.2]
 [ 4.7 3.2 1.3 0.2]
 [ 4.6 3.1 1.5 0.2]
 [ 5. 3.6 1.4 0.2]
 [ 5.4 3.9 1.7 0.4]
 [ 4.6 3.4 1.4 0.3]
 [ 5. 3.4 1.5 0.2]
 [ 4.4 2.9 1.4 0.2]
 [ 4.9 3.1 1.5 0.1]
 [ 5.4 3.7 1.5 0.2]
 [ 4.8 3.4 1.6 0.2]
 [ 4.8 3. 1.4 0.1]
 [ 4.3 3. 1.1 0.1]
 [ 5.8 4. 1.2 0.2]
 [ 5.7 4.4 1.5 0.4]
 [ 5.4 3.9 1.3 0.4]
 [ 5.1 3.5 1.4 0.3]
 [ 5.7 3.8 1.7 0.3]
 [ 5.1 3.8 1.5 0.3]
 [ 5.4 3.4 1.7 0.2]
 [ 5.1 3.7 1.5 0.4]
 [ 4.6 3.6 1. 0.2]
 [ 5.1 3.3 1.7 0.5]
 [ 4.8 3.4 1.9 0.2]
 [ 5. 3. 1.6 0.2]
 [ 5. 3.4 1.6 0.4]
 [ 5.2 3.5 1.5 0.2]
 [ 5.2 3.4 1.4 0.2]
 [ 4.7 3.2 1.6 0.2]
 [ 4.8 3.1 1.6 0.2]
 [ 5.4 3.4 1.5 0.4]
 [ 5.2 4.1 1.5 0.1]
 [ 5.5 4.2 1.4 0.2]
 [ 4.9 3.1 1.5 0.1]
 [ 5. 3.2 1.2 0.2]
 [ 5.5 3.5 1.3 0.2]
 [ 4.9 3.1 1.5 0.1]
 [ 4.4 3. 1.3 0.2]
 [ 5.1 3.4 1.5 0.2]
 [ 5. 3.5 1.3 0.3]
 [ 4.5 2.3 1.3 0.3]
 [ 4.4 3.2 1.3 0.2]
 [ 5. 3.5 1.6 0.6]
 [ 5.1 3.8 1.9 0.4]
 [ 4.8 3. 1.4 0.3]
 [ 5.1 3.8 1.6 0.2]
 [ 4.6 3.2 1.4 0.2]
 [ 5.3 3.7 1.5 0.2]
 [ 5. 3.3 1.4 0.2]
 [ 7. 3.2 4.7 1.4]
 [ 6.4 3.2 4.5 1.5]
 [ 6.9 3.1 4.9 1.5]
 [ 5.5 2.3 4. 1.3]
 [ 6.5 2.8 4.6 1.5]
 [ 5.7 2.8 4.5 1.3]
 [ 6.3 3.3 4.7 1.6]
 [ 4.9 2.4 3.3 1. ]
 [ 6.6 2.9 4.6 1.3]
 [ 5.2 2.7 3.9 1.4]
 [ 5. 2. 3.5 1. ]
 [ 5.9 3. 4.2 1.5]
 [ 6. 2.2 4. 1. ]
 [ 6.1 2.9 4.7 1.4]
 [ 5.6 2.9 3.6 1.3]
 [ 6.7 3.1 4.4 1.4]
 [ 5.6 3. 4.5 1.5]
 [ 5.8 2.7 4.1 1. ]
 [ 6.2 2.2 4.5 1.5]
 [ 5.6 2.5 3.9 1.1]
 [ 5.9 3.2 4.8 1.8]
 [ 6.1 2.8 4. 1.3]
 [ 6.3 2.5 4.9 1.5]
 [ 6.1 2.8 4.7 1.2]
 [ 6.4 2.9 4.3 1.3]
 [ 6.6 3. 4.4 1.4]
 [ 6.8 2.8 4.8 1.4]
 [ 6.7 3. 5. 1.7]
 [ 6. 2.9 4.5 1.5]
 [ 5.7 2.6 3.5 1. ]
 [ 5.5 2.4 3.8 1.1]
 [ 5.5 2.4 3.7 1. ]
 [ 5.8 2.7 3.9 1.2]
 [ 6. 2.7 5.1 1.6]
 [ 5.4 3. 4.5 1.5]
 [ 6. 3.4 4.5 1.6]
 [ 6.7 3.1 4.7 1.5]
 [ 6.3 2.3 4.4 1.3]
 [ 5.6 3. 4.1 1.3]
 [ 5.5 2.5 4. 1.3]
 [ 5.5 2.6 4.4 1.2]
 [ 6.1 3. 4.6 1.4]
 [ 5.8 2.6 4. 1.2]
 [ 5. 2.3 3.3 1. ]
 [ 5.6 2.7 4.2 1.3]
 [ 5.7 3. 4.2 1.2]
 [ 5.7 2.9 4.2 1.3]
 [ 6.2 2.9 4.3 1.3]
 [ 5.1 2.5 3. 1.1]
 [ 5.7 2.8 4.1 1.3]
 [ 6.3 3.3 6. 2.5]
 [ 5.8 2.7 5.1 1.9]
 [ 7.1 3. 5.9 2.1]
 [ 6.3 2.9 5.6 1.8]
 [ 6.5 3. 5.8 2.2]
 [ 7.6 3. 6.6 2.1]
 [ 4.9 2.5 4.5 1.7]
 [ 7.3 2.9 6.3 1.8]
 [ 6.7 2.5 5.8 1.8]
 [ 7.2 3.6 6.1 2.5]
 [ 6.5 3.2 5.1 2. ]
 [ 6.4 2.7 5.3 1.9]
 [ 6.8 3. 5.5 2.1]
 [ 5.7 2.5 5. 2. ]
 [ 5.8 2.8 5.1 2.4]
 [ 6.4 3.2 5.3 2.3]
 [ 6.5 3. 5.5 1.8]
 [ 7.7 3.8 6.7 2.2]
 [ 7.7 2.6 6.9 2.3]
 [ 6. 2.2 5. 1.5]
 [ 6.9 3.2 5.7 2.3]
 [ 5.6 2.8 4.9 2. ]
 [ 7.7 2.8 6.7 2. ]
 [ 6.3 2.7 4.9 1.8]
 [ 6.7 3.3 5.7 2.1]
 [ 7.2 3.2 6. 1.8]
 [ 6.2 2.8 4.8 1.8]
 [ 6.1 3. 4.9 1.8]
 [ 6.4 2.8 5.6 2.1]
 [ 7.2 3. 5.8 1.6]
 [ 7.4 2.8 6.1 1.9]
 [ 7.9 3.8 6.4 2. ]
 [ 6.4 2.8 5.6 2.2]
 [ 6.3 2.8 5.1 1.5]
 [ 6.1 2.6 5.6 1.4]
 [ 7.7 3. 6.1 2.3]
 [ 6.3 3.4 5.6 2.4]
 [ 6.4 3.1 5.5 1.8]
 [ 6. 3. 4.8 1.8]
 [ 6.9 3.1 5.4 2.1]
 [ 6.7 3.1 5.6 2.4]
 [ 6.9 3.1 5.1 2.3]
 [ 5.8 2.7 5.1 1.9]
 [ 6.8 3.2 5.9 2.3]
 [ 6.7 3.3 5.7 2.5]
 [ 6.7 3. 5.2 2.3]
 [ 6.3 2.5 5. 1.9]
 [ 6.5 3. 5.2 2. ]
 [ 6.2 3.4 5.4 2.3]
 [ 5.9 3. 5.1 1.8]]

As we said, these are the 4 features: the first 2 are sepal length and sepal width, and the next 2 are petal length and petal width. Now let's check the target data

print("Iris target :: ", iris_dataset['target'])

Output

Iris target ::  [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
 2 2]
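The integer labels 0, 1, and 2 correspond to the three species names stored in the dataset. As a quick sanity check, here is a minimal sketch (the target_names key and np.bincount are standard scikit-learn/NumPy features, not code from the original post) that prints the species names and confirms the balanced class distribution:

import numpy as np
from sklearn import datasets

iris_dataset = datasets.load_iris()

# 'target_names' maps the integer labels 0, 1, 2 to the species names
print("Iris target names :: ", iris_dataset['target_names'])

# Count how many samples carry each label; the dataset is balanced (50 per class)
print("Samples per class :: ", np.bincount(iris_dataset['target']))

This should show the three species names (setosa, versicolor, virginica) and 50 samples per class, matching the class distribution in the DESCR output above.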

Visualizing the Iris dataset

Let's take the individual features, like sepal and petal length and width, and visualize the corresponding target classes with different colors.

Visualizing the relationship between sepal and target classes

def visuvalize_sepal_data():
    iris = datasets.load_iris()
    X = iris.data[:, :2]  # we only take the first two features
    y = iris.target
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
    plt.xlabel('Sepal length')
    plt.ylabel('Sepal width')
    plt.title('Sepal Width & Length')
    plt.show()

visuvalize_sepal_data()

To visualize the sepal length, sepal width, and the corresponding target classes, we create a function named visuvalize_sepal_data. At the beginning, we load the iris dataset into the iris variable. Next, we store the first 2 features of the iris dataset, sepal length and sepal width, in the variable X. Then we store the corresponding target values in the variable y.

As we have seen, the target variable contains the values 0, 1, and 2; each value represents an iris flower species. We then plot the points on the XY plane: on the X-axis we plot the sepal length values, and on the Y-axis the sepal width values. If you followed the instructions in Installing Python machine learning packages correctly and run the above code, you will get the image below.

Iris Sepal length & width Vs Iris Species type

Let's create a similar graph for petal length and width
Visualizing the relationship between Petal and target classes

def visuvalize_petal_data():
    iris = datasets.load_iris()
    X = iris.data[:, 2:]  # we only take the last two features
    y = iris.target
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
    plt.xlabel('Petal length')
    plt.ylabel('Petal width')
    plt.title('Petal Width & Length')
    plt.show()

visuvalize_petal_data()

If we run the above code, we will get the below graph.


Iris Petal length & width Vs Species Type

We have now successfully visualized the behavior of the target class (iris species type) with respect to sepal length and width, as well as petal length and width. Now let's model different kernel SVM classifiers, first considering only the sepal features (length and width) and then only the petal features (length and width).

Modeling different kernel SVM classifiers using the Iris Sepal features

iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the two sepal features
y = iris.target
C = 1.0  # SVM regularization parameter

# SVC with linear kernel
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
# LinearSVC (linear kernel)
lin_svc = svm.LinearSVC(C=C).fit(X, y)
# SVC with RBF kernel
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, y)
# SVC with polynomial (degree 3) kernel
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y)

To model the different kernel SVM classifiers using the iris sepal features, we first load the iris dataset into the iris variable, as we have done before. Next, we load the sepal length and width values into the variable X, and store the target values in the variable y. Once the data is ready, we model the SVM classifiers by calling the scikit-learn svm module with different kernels.
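Note that the post fits these models on the full dataset, because the goal here is visualization. If you also want a rough measure of how well each kernel generalizes, a minimal sketch along the following lines can score each model on held-out data (the 70/30 split, train_test_split, and accuracy_score are an assumption added here, not part of the original post):

from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hold out 30% of the sepal data purely to estimate accuracy (illustrative split)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [('SVC with linear kernel', svm.SVC(kernel='linear', C=C)),
                    ('LinearSVC (linear kernel)', svm.LinearSVC(C=C)),
                    ('SVC with RBF kernel', svm.SVC(kernel='rbf', gamma=0.7, C=C)),
                    ('SVC with polynomial (degree 3) kernel', svm.SVC(kernel='poly', degree=3, C=C))]:
    model.fit(X_train, y_train)
    print(name, " accuracy :: ", accuracy_score(y_test, model.predict(X_test)))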

Now let's visualize each kernel SVM classifier to understand how well the classifiers fit the training features.

Visualizing the modeled SVM classifiers with the Iris Sepal features

h = .02  # step size in the mesh

# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
# titles for the plots
titles = ['SVC with linear kernel',
          'LinearSVC (linear kernel)',
          'SVC with RBF kernel',
          'SVC with polynomial (degree 3) kernel']

for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
    plt.subplot(2, 2, i + 1)
    plt.subplots_adjust(wspace=0.4, hspace=0.4)

    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)

    # Plot also the training points
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
    plt.xlabel('Sepal length')
    plt.ylabel('Sepal width')
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.xticks(())
    plt.yticks(())
    plt.title(titles[i])

plt.show()

If we run the above code, we will get the graph below, from which we can see how well the different kernel SVM classifiers model the data.

SVM classifier with Iris Sepal features


From the graphs above, you can clearly see how the different kernels model the same data. Now let's model the SVM classifiers with the petal features, using the same kernels we used with the sepal features.

Modeling different kernel SVM classifiers using the Iris Petal features

iris = datasets.load_iris()
X = iris.data[:, 2:]  # we only take the two petal features
y = iris.target
C = 1.0  # SVM regularization parameter

# SVC with linear kernel
svc = svm.SVC(kernel='linear', C=C).fit(X, y)
# LinearSVC (linear kernel)
lin_svc = svm.LinearSVC(C=C).fit(X, y)
# SVC with RBF kernel
rbf_svc = svm.SVC(kernel='rbf', gamma=0.7, C=C).fit(X, y)
# SVC with polynomial (degree 3) kernel
poly_svc = svm.SVC(kernel='poly', degree=3, C=C).fit(X, y)

The above code is very similar to the previous SVM modeling code. The only difference is loading the petal features into the X variable; the remaining code is copied from the previously modeled SVM classifiers.

Now let's visualize each kernel SVM classifier to understand how well the classifiers fit the petal features.

Visualizing the modeled SVM classifiers with the Iris Petal features

h = .02  # step size in the mesh

# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                     np.arange(y_min, y_max, h))
# titles for the plots
titles = ['SVC with linear kernel',
          'LinearSVC (linear kernel)',
          'SVC with RBF kernel',
          'SVC with polynomial (degree 3) kernel']

for i, clf in enumerate((svc, lin_svc, rbf_svc, poly_svc)):
    # Plot the decision boundary. For that, we will assign a color to each
    # point in the mesh [x_min, x_max]x[y_min, y_max].
    plt.subplot(2, 2, i + 1)
    plt.subplots_adjust(wspace=0.4, hspace=0.4)

    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.contourf(xx, yy, Z, cmap=plt.cm.coolwarm, alpha=0.8)

    # Plot also the training points
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm)
    plt.xlabel('Petal length')
    plt.ylabel('Petal width')
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.xticks(())
    plt.yticks(())
    plt.title(titles[i])

plt.show()

If we run the above code, we will get the graph below, from which we can see how well the different kernel SVM classifiers model the data.

SVM classifier with Iris Petal features

This is how the modeled SVM classifiers look when we use only the petal width and length. With this, we come to the end. Before wrapping up the post, let's quickly look at how to use a modeled SVM classifier to predict iris flower categories.

Predicting iris flower category

To identify the iris flower type using a modeled SVM classifier, we need to call the predict function on the fitted model. For example, to predict the iris flower category using the lin_svc model, we call lin_svc.predict() with the features. In our case, these features will be either the sepal length and width or the petal length and width, depending on which features the model was trained with. If you are not clear on how to use the predict function correctly, check out the knn classifier with scikit-learn post.
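For example, here is a minimal sketch using the lin_svc model fitted on the petal features in the previous section; the petal measurements below are made-up values chosen purely for illustration:

# Hypothetical new flowers: [petal length (cm), petal width (cm)]
new_flowers = [[1.4, 0.2],  # small petals, typical of Setosa
               [4.5, 1.5],  # mid-sized petals, typical of Versicolor
               [6.0, 2.2]]  # large petals, typical of Virginica

predictions = lin_svc.predict(new_flowers)
print("Predicted classes :: ", predictions)
print("Predicted species :: ", iris.target_names[predictions])

The predicted classes come back as the integers 0, 1, or 2, which we map back to the species names through the dataset's target_names.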

Conclusion

In this article, we learned how to model the support vector machine classifier using different kernels with the Python scikit-learn package. In the process, we learned how to visualize the data points and how to visualize the modeled SVM classifiers to understand how well the fitted models fit the training dataset.

Related Articles

- Introduction to support vector machine classifier
- Support vector machine classifier implementation in R programming language
