SVM Kernels: Understanding the Role of Kernels in Support Vector Machines

udit
3 min read · Jan 1, 2023


Source: https://towardsdatascience.com/the-kernel-trick-c98cdbcaeb3f

Support vector machines (SVMs) are a type of supervised learning algorithm that is commonly used for classification and regression tasks. SVMs work by finding a decision boundary that maximally separates the data points in different classes.

In order to classify data points that are not separable in their original form, SVMs use a mathematical function called a kernel function, which corresponds to mapping the data points from the original feature space into a higher-dimensional space. Crucially, the kernel computes inner products in that higher-dimensional space directly, without ever constructing the mapping itself — this is known as the kernel trick. The purpose of the kernel function is to transform the data in such a way that the decision boundary becomes easier to identify.
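To make the kernel trick concrete, here is a small sketch (my own illustration, using the standard degree-2 polynomial kernel and its textbook feature map, not anything specific to this article). The kernel value equals the dot product of the explicitly mapped points, but is far cheaper to compute:

```python
import numpy as np

x = np.array([1.0, 2.0])
z = np.array([3.0, 1.0])

# Degree-2 polynomial kernel: K(x, z) = (x . z)^2
k = (x @ z) ** 2

# The same value via the explicit feature map
# phi(v) = (v1^2, sqrt(2) * v1 * v2, v2^2)
def phi(v):
    return np.array([v[0] ** 2, np.sqrt(2) * v[0] * v[1], v[1] ** 2])

explicit = phi(x) @ phi(z)

print(k, explicit)  # both 25.0 -- the kernel never builds the mapping
```

Here the mapped space has 3 dimensions; for higher degrees and more features it grows combinatorially, which is exactly why computing the kernel directly matters.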

There are various types of kernel functions that can be used in SVMs, including linear kernels, polynomial kernels, and radial basis function (RBF) kernels. Each type of kernel has its own unique characteristics and is suitable for different types of data.

Linear kernels:

Linear kernels are the most basic type of kernel function and are used when the data is linearly separable. A linear kernel performs no transformation at all: it is simply the dot product of two data points in the original feature space, K(x, z) = x · z. The decision boundary in this case is a hyperplane, which is a subspace of one dimension less than the original space.

Example:

To illustrate how a linear kernel is used in an SVM, let’s consider an example. Suppose we are given a dataset containing two classes of points, one class of red dots and one of blue. We can use an SVM with a linear kernel to find a decision boundary that maximally separates the two classes.

In this example, the decision boundary is a straight line that maximally separates the two classes of points.
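A rough sketch of this setup in code (the toy points and scikit-learn calls below are my own illustration, not from the article):

```python
import numpy as np
from sklearn.svm import SVC

# Two linearly separable clusters (stand-ins for the red and blue dots)
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0],
              [4, 4], [5, 5], [4, 5], [5, 4]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The separating hyperplane is the line w . x + b = 0
print(clf.coef_, clf.intercept_)

# One query point from inside each cluster
pred = clf.predict([[0.5, 0.5], [4.5, 4.5]])
print(pred)
```

Because the clusters are well separated, the fitted hyperplane sits between them and each query point gets its cluster's label.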

Polynomial kernels:

Polynomial kernels are used when the data is not linearly separable but can be separated by a polynomial function. A polynomial kernel, K(x, z) = (γ x · z + r)^d, implicitly maps the data points from the original feature space to a higher-dimensional space of polynomial features of degree up to d. The decision boundary in the original space is then a polynomial surface of degree d.

Example:

To illustrate how a polynomial kernel is used in an SVM, let’s consider an example. Suppose we are given a dataset containing two classes of points, one class of red dots and one of blue, arranged so that no straight line separates them. We can use an SVM with a polynomial kernel to find a decision boundary that maximally separates the two classes.

In this example, the decision boundary is a polynomial curve that maximally separates the two classes of points.
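A minimal sketch of such a case (my own toy data, assuming a degree-2 polynomial kernel): one class sits near the origin and the other surrounds it, so no straight line works, but a quadratic boundary does.

```python
import numpy as np
from sklearn.svm import SVC

# Inner points (class 0) and outer points (class 1): not linearly separable
X = np.array([[0.5, 0], [0, 0.5], [-0.5, 0], [0, -0.5],
              [3, 0], [0, 3], [-3, 0], [0, -3]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# Degree-2 polynomial kernel: K(x, z) = (gamma * x . z + coef0)^2
clf = SVC(kernel="poly", degree=2, coef0=1.0, C=10.0).fit(X, y)

print(clf.score(X, y))  # training accuracy; the quadratic boundary
                        # encircles the inner class
pred = clf.predict([[0.2, 0.2], [2.5, 0]])
print(pred)
```

The degree-2 feature space contains terms like x1² and x2², which is exactly what a circular boundary around the origin needs.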

Radial basis function (RBF) kernels:

Radial basis function (RBF) kernels are used when the data is not linearly separable and cannot be separated well by a low-degree polynomial function. The most common RBF kernel is the Gaussian, K(x, z) = exp(−γ‖x − z‖²), which is largest (equal to 1) when the two points coincide and decays toward zero as the distance between them grows. It corresponds to an implicit mapping into an infinite-dimensional space, and the decision boundary in the original space is a non-linear curve.

Example:

To illustrate how an RBF kernel is used in an SVM, let’s consider an example. Suppose we are given a dataset containing two classes of points, one class of red dots and one of blue, interleaved in a pattern no line can separate. We can use an SVM with an RBF kernel to find a decision boundary that maximally separates the two classes.

In this example, the decision boundary is a non-linear curve that maximally separates the two classes of points.
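A small sketch with the classic XOR pattern (my own toy data, not from the article), where opposite corners share a class and no straight line separates them:

```python
import numpy as np
from sklearn.svm import SVC

# XOR-style data: (0,0) and (1,1) are class 0, (0,1) and (1,0) are class 1
X = np.array([[0, 0], [1, 1], [0, 1], [1, 0]])
y = np.array([0, 0, 1, 1])

# A large C approximates a hard margin on this tiny dataset
clf = SVC(kernel="rbf", gamma=1.0, C=100.0).fit(X, y)
print(clf.score(X, y))  # the RBF boundary fits all four points

# The kernel itself: exp(-gamma * ||x - z||^2), largest when x == z
k = np.exp(-1.0 * np.sum((X[0] - X[2]) ** 2))
print(k)  # exp(-1), since the points are distance 1 apart
```

Because the Gaussian kernel matrix on distinct points is positive definite, the mapped points are always linearly separable in the implicit feature space, which is why RBF kernels are such a flexible default.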

Conclusion:

SVM kernels are mathematical functions used to map data points from the original feature space to a higher-dimensional space in which a decision boundary that maximally separates the classes is easier to find. There are various types of kernels that can be used in SVMs, most commonly linear, polynomial, and RBF kernels, and choosing the one that matches the structure of the data is an important part of building an effective classifier.
