Machine Learning A-Z: Part 3 – Classification (Kernel SVM Intuition)

Kernel SVM Intuition

Data Type:
– Linearly Separable
– Not Linearly Separable => Kernel SVM

A Higher-Dimensional Space

Mapping to a higher dimension.

[1D Space] (x1)
f = x1 – 5

[2D Space] (x1, x2)
f = (x1 – 5)^2

[3D Space] (x1, x2, z)
=> Mapping to a higher-dimensional space like this can be highly compute-intensive, which is why the kernel trick is used instead.
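
To make the idea concrete, here is a minimal NumPy sketch (the points and labels are made up, not from the course): a 1D set that no single threshold can separate becomes linearly separable once each point is projected to a second dimension with f = (x1 – 5)^2.

import numpy as np

x1 = np.array([1, 2, 3, 5, 7, 8, 9])        # 1D points
labels = np.array([1, 1, 1, 0, 1, 1, 1])    # class 0 sits in the middle, so no 1D threshold separates the classes

f = (x1 - 5) ** 2                            # mapping each point to a 2nd dimension

# In the (x1, f) plane, class 0 lies below the line f = 2 and class 1 lies above it,
# so a linear separator now exists even though none existed in 1D.
for xi, fi, yi in zip(x1, f, labels):
    print(f"x1={xi}, f={fi}, class={yi}")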

The Gaussian RBF Kernel

Types of Kernel Functions

– Gaussian RBF Kernel
K(x, l^i) = e^(–||x – l^i||^2 / (2σ^2))

– Sigmoid Kernel
K(X, Y) = tanh(γ・X^T Y + r)

– Polynomial Kernel
K(X, Y) = (γ・X^T Y + r)^d, γ > 0

http://mlkernels.readthedocs.io/en/latest/kernels.html
(http://mlkernels.readthedocs.io/en/latest/kernelfunctions.html)
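
As a rough illustration, the three kernels above can be written directly in NumPy. The sketch below is my own, and the values of σ, γ, r and d are arbitrary examples, not prescribed settings.

import numpy as np

def rbf_kernel(x, l, sigma=1.0):
    # Gaussian RBF: K(x, l) = exp(-||x - l||^2 / (2 * sigma^2))
    return np.exp(-np.sum((x - l) ** 2) / (2 * sigma ** 2))

def sigmoid_kernel(x, y, gamma=0.1, r=0.0):
    # Sigmoid: K(x, y) = tanh(gamma * x^T y + r)
    return np.tanh(gamma * np.dot(x, y) + r)

def polynomial_kernel(x, y, gamma=0.1, r=1.0, d=3):
    # Polynomial: K(x, y) = (gamma * x^T y + r)^d, gamma > 0
    return (gamma * np.dot(x, y) + r) ** d

x = np.array([1.0, 2.0])
l = np.array([0.0, 1.0])
print(rbf_kernel(x, l), sigmoid_kernel(x, l), polynomial_kernel(x, l))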

Implementation

Python
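
A minimal, self-contained scikit-learn sketch of a Kernel SVM classifier; the synthetic make_moons data and the default parameters are placeholders rather than the course's exact setup.

from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)   # not linearly separable
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Feature scaling (important for distance-based kernels)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

classifier = SVC(kernel='rbf', random_state=0)   # Gaussian RBF kernel
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)
print(confusion_matrix(y_test, y_pred))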

R

Machine Learning A-Z: Part 3 – Classification (SVM Intuition)

Support Vector Machine (SVM)

Classification: how to classify a newly added data point.

Maximum Margin: the margin that maximizes the distance between the separating hyperplane and the support vectors.
Support Vectors: the data points (vectors) that determine the maximum margin.

Maximum Margin Hyperplane (Maximum Margin Classifier)
Positive Hyperplane
Negative Hyperplane

Implementation

Python
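
A minimal scikit-learn sketch of a linear SVM; the synthetic blob data and the default C value are placeholder assumptions, not the course's dataset.

from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_blobs(n_samples=200, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

classifier = SVC(kernel='linear', random_state=0)
classifier.fit(X_train, y_train)

# The support vectors are the points that determine the maximum margin.
print("number of support vectors:", classifier.support_vectors_.shape[0])
print("test accuracy:", classifier.score(X_test, y_test))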

R

Machine Learning A-Z: Part 3 – Classification (K-Nearest Neighbors)

K-NN Intuition

How do we classify a new data point between category 1 and 2?
K-NN identifies which category the new data point should be in.

STEP 1: Choose the number K of neighbors

STEP 2: Take the K nearest neighbors of the new data point, according to the Euclidean distance

STEP 3: Among these K neighbors, count the number of data points in each category

STEP 4: Assign the new data point to the category where you counted the most neighbors

Euclidean Distance

2 points:
P1(x1, y1)
P2(x2, y2)

Euclidean Distance between P1 and P2 = √((x2 – x1)^2 + (y2 – y1)^2)
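
Putting STEPs 1 to 4 and the distance formula together, a small from-scratch sketch might look like the following (the function and variable names are my own, purely for illustration).

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, new_point, K=5):
    # STEP 2: Euclidean distance from the new point to every training point
    distances = np.sqrt(np.sum((X_train - new_point) ** 2, axis=1))
    # STEP 2 (cont.): take the K nearest neighbours
    nearest = np.argsort(distances)[:K]
    # STEP 3: count the neighbours in each category
    votes = Counter(y_train[nearest])
    # STEP 4: assign the category with the most neighbours
    return votes.most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [3.8, 4.0], [4.1, 3.9]])
y_train = np.array([0, 0, 1, 1, 1])
print(knn_predict(X_train, y_train, np.array([3.9, 4.1]), K=3))   # -> 1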

Implementation

Python
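
A minimal scikit-learn sketch; the iris dataset and K = 5 are placeholders, not necessarily the course's setup. metric='minkowski' with p=2 corresponds to the Euclidean distance above.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

# n_neighbors is K from STEP 1
classifier = KNeighborsClassifier(n_neighbors=5, metric='minkowski', p=2)
classifier.fit(X_train, y_train)
print("test accuracy:", classifier.score(X_test, y_test))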

R

Machine Learning A-Z: Part 3 – Classification (Logistic Regression)

Linear Regression

– Simple:
y = b0 + b1 * x1

– Multiple:
y = b0 + b1 * x1 + … + bn * xn

Logistic Regression

Sigmoid Function:
p = 1 / (1 + e^(–y))

ln(p / (1 – p)) = b0 + b1 * x

y: Actual DV [dependent variable]
p^: Probability [p_hat]
y^: Predicted DV
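
A tiny numeric sketch (the coefficients b0 and b1 are made up) showing how the sigmoid turns the linear output y into a probability p^ and then into a predicted class y^.

import numpy as np

b0, b1 = -4.0, 1.5          # made-up coefficients
x1 = np.array([1.0, 2.0, 3.0, 4.0])

y = b0 + b1 * x1                      # linear part
p_hat = 1.0 / (1.0 + np.exp(-y))      # sigmoid: p = 1 / (1 + e^(-y))
y_hat = (p_hat >= 0.5).astype(int)    # predicted DV using a 0.5 threshold

print(p_hat)   # probabilities between 0 and 1
print(y_hat)   # predicted classes 0/1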

Implementation

Python
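
A minimal scikit-learn sketch; the breast_cancer dataset is a placeholder, not necessarily the course's data.

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)

classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)

y_pred = classifier.predict(X_test)            # predicted DV (y^)
p_hat = classifier.predict_proba(X_test)[:, 1] # probability (p^)
print(confusion_matrix(y_test, y_pred))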

R

Templates

Python

R

Machine Learning A-Z: Part 2 – Regression (Evaluating Regression Models Performance)

R Squared Intuition

Simple Linear Regression

R Squared

SUM (yi – yi^)^2 -> min

SSres = SUM (yi – yi^)^2
res = residual

SStot = SUM (yi – yavg)^2
tot = total

R2 = 1 – SSres / SStot

Adjusted R2 (R Squared)

R2 = 1 – SSres / SStot

y = b0 + b1 * x1

y = b0 + b1 * x1 + b2 * x2

SSres -> Min

R2 – Goodness of fit (greater is better)

Problem:
y = b0 + b1 * x1 + b2 * x2 (+ b3 * x3)

SSres -> Min

R2 will never decrease when a new regressor is added, even if that regressor is useless, so R2 alone cannot tell whether the extra variable actually improves the model (this is what Adjusted R2 corrects for).

R2 = 1 – SSres / SStot

Adj R2 = 1 – (1 – R2) * (n – 1) / (n – p – 1)
p – number of regressors
n – sample size
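
A small sketch of both formulas in NumPy (the helper function and the numbers are my own, purely for illustration).

import numpy as np

def r2_scores(y, y_pred, p):
    # p = number of regressors, n = sample size
    n = len(y)
    ss_res = np.sum((y - y_pred) ** 2)          # SSres
    ss_tot = np.sum((y - np.mean(y)) ** 2)      # SStot
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    return r2, adj_r2

y = np.array([3.0, 5.0, 7.0, 9.0, 11.0])
y_pred = np.array([2.8, 5.1, 7.2, 8.9, 11.3])   # predictions from some model
print(r2_scores(y, y_pred, p=1))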

1. Pros and cons of each regression model

https://www.superdatascience.com/wp-content/uploads/2017/02/Regression-Pros-Cons.pdf

2. How do I know which model to choose for my problem?

1) Figure out whether your problem is linear or non-linear.

– linear:
– only one feature: Simple Linear Regression
– several features: Multiple Linear Regression

– non-linear:
– Polynomial Regression
– SVR
– Decision Tree
– Random Forest

3. How can I improve each of these models?

=> In Part 10 – Model Selection

a. The parameters that are learnt, for example the coefficients in Linear Regression.

b. The hyperparameters.
– not learnt from the data
– values fixed before training that appear inside the model equations (e.g. a regularization parameter).
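
As a rough sketch of the distinction (the model, grid values and dataset below are arbitrary assumptions): learnt parameters can be read off a fitted model, while hyperparameters are typically tuned with something like scikit-learn's GridSearchCV.

from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

# Parameters: values learnt from the data, e.g. the coefficients of a linear model.
log_reg = LogisticRegression().fit(X, y)
print("learnt coefficients:", log_reg.coef_, log_reg.intercept_)

# Hyperparameters: values fixed before training (here C and gamma of an RBF SVM),
# typically chosen by searching over a grid with cross-validation.
grid = {'C': [0.1, 1, 10], 'gamma': [0.1, 0.5, 1.0]}
search = GridSearchCV(SVC(kernel='rbf'), grid, cv=5)
search.fit(X, y)
print("best hyperparameters:", search.best_params_)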

https://www.superdatascience.com/wp-content/uploads/2017/02/Regularization.pdf
