Wednesday, August 13, 2025

Q4: How can SVM handle non-linear boundaries?

Other questions from Chapter 1 of 'The Hundred-Page Machine Learning Book'


SVM can handle non-linear boundaries using the kernel trick.

Here’s how it works:


1️⃣ The Problem

  • In the original feature space, the data might not be linearly separable.

  • A straight line (or hyperplane in higher dimensions) can’t divide the classes without misclassifications.
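
To make this concrete, here is a minimal sketch, assuming scikit-learn is available (the concentric-circles dataset and all parameter values are illustrative choices, not from the original question):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no straight line can separate the classes.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# A linear SVM can only draw a straight boundary, so it misclassifies heavily.
linear_svm = SVC(kernel="linear").fit(X, y)
print("Linear SVM training accuracy:", linear_svm.score(X, y))  # typically ~0.5
```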


2️⃣ The Idea: Map Data to a Higher-Dimensional Space

  • SVM applies a feature transformation $\phi(\mathbf{x})$ that maps the original data into a higher-dimensional space.

  • In this new space, the data may become linearly separable.

Example:

  • In 2D, circles and spirals can be hard to separate with a line.

  • If we map the data to 3D (adding a new dimension like $x_1^2 + x_2^2$), a plane might separate them perfectly (see the sketch below).
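
Here is a sketch of that lifting step, reusing the hypothetical circles data from above (the new third feature $x_1^2 + x_2^2$ is just the squared distance from the origin):

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Lift 2D points into 3D by appending x1^2 + x2^2 (the squared radius).
X_3d = np.column_stack([X, X[:, 0] ** 2 + X[:, 1] ** 2])

# Along the new axis the inner ring sits below the outer ring,
# so a flat plane (a linear SVM) now separates the classes almost perfectly.
svm_3d = SVC(kernel="linear").fit(X_3d, y)
print("Linear SVM on lifted features:", svm_3d.score(X_3d, y))  # close to 1.0
```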


3️⃣ The Kernel Trick

  • Instead of computing $\phi(\mathbf{x})$ explicitly (which can be computationally expensive), SVM uses a kernel function $K(\mathbf{x}_i, \mathbf{x}_j)$ that computes the inner product in the transformed space directly: $K(\mathbf{x}_i, \mathbf{x}_j) = \langle \phi(\mathbf{x}_i), \phi(\mathbf{x}_j) \rangle$.

  • This works because the SVM optimization problem and decision function depend on the data only through inner products, so the high-dimensional mapping never has to be computed. A quick numerical check of this identity follows below.
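
As a sanity check, the degree-2 polynomial kernel $K(\mathbf{x}, \mathbf{z}) = (\mathbf{x} \cdot \mathbf{z})^2$ has the known explicit map $\phi(x_1, x_2) = (x_1^2, \sqrt{2}\,x_1 x_2, x_2^2)$; this is a standard textbook identity, and the sketch below simply verifies it numerically:

```python
import numpy as np

def phi(v):
    """Explicit degree-2 polynomial feature map for a 2D input."""
    x1, x2 = v
    return np.array([x1 ** 2, np.sqrt(2) * x1 * x2, x2 ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, -1.0])

kernel_value = np.dot(x, z) ** 2          # (x . z)^2, computed in the original 2D space
explicit_value = np.dot(phi(x), phi(z))   # <phi(x), phi(z)>, needs the 3D map

print(kernel_value, explicit_value)  # both equal 1.0 for these inputs
```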


4️⃣ Common Kernels

  • Polynomial Kernel: $K(\mathbf{x}, \mathbf{z}) = (\mathbf{x} \cdot \mathbf{z} + c)^d$

  • Radial Basis Function (RBF): $K(\mathbf{x}, \mathbf{z}) = \exp(-\gamma \|\mathbf{x} - \mathbf{z}\|^2)$

  • Sigmoid Kernel: $K(\mathbf{x}, \mathbf{z}) = \tanh(\alpha \, \mathbf{x} \cdot \mathbf{z} + c)$
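
These map directly onto the `kernel` argument of scikit-learn's `SVC` (a sketch; note that scikit-learn parameterizes the polynomial and sigmoid kernels with `gamma` and `coef0` rather than the $c$, $d$, $\alpha$ symbols above):

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

models = {
    "poly": SVC(kernel="poly", degree=3, coef0=1.0),   # (gamma * x.z + coef0)^degree
    "rbf": SVC(kernel="rbf", gamma="scale"),           # exp(-gamma * ||x - z||^2)
    "sigmoid": SVC(kernel="sigmoid", coef0=0.0),       # tanh(gamma * x.z + coef0)
}

for name, model in models.items():
    print(name, model.fit(X, y).score(X, y))
```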


5️⃣ Intuition

Think of kernels as a way to add non-linear “features” automatically, allowing SVM to find complex boundaries in the original space while still solving a linear problem in the implicit higher-dimensional space.
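
Putting it together on the hypothetical circles data from earlier: the RBF kernel finds the circular boundary without ever building the higher-dimensional features by hand (the accuracies in the comment are what one would typically expect, not measured benchmarks):

```python
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "rbf"):
    acc = SVC(kernel=kernel).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{kernel}: test accuracy = {acc:.2f}")  # linear ~0.5, rbf ~1.0
```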



Tags: Machine Learning, Interview Preparation, Technology
