The logistic classifier is based on a linear equation, much like the equation of a plane:

y = WX + b

W is the weight matrix, X is the input vector, and y is the output vector.
b is the bias, which shifts the decision boundary.
In the context of logistic regression, the outputs of this linear form are called logits.
y, the result of this vector calculation, is a score, not a probability.
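As a rough sketch of this score computation in Python (the sizes and the W, X, and b values below are made up for illustration, not from the text):
///
import numpy as np

# Hypothetical example: 3 classes, 4 input features (values are made up)
W = np.array([[ 0.2, -0.5,  0.1,  2.0],
              [ 1.5,  1.3,  2.1,  0.0],
              [ 0.0,  0.25, 0.2, -0.3]])  # weight matrix, one row per class
X = np.array([1.0, 2.0, 0.5, -1.0])       # input vector
b = np.array([0.1, 0.2, -0.3])            # bias, one entry per class

y = W @ X + b  # raw scores (logits), one per class; not probabilities
print(y)
///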
To turn the scores into probabilities, we apply the softmax function: it exponentiates each score and divides by the sum of all the exponentials, so every output is positive and the outputs sum to 1. We can then pick the class whose probability is closest to 1.
Softmax example code in Python:
///
import numpy as np

scores = [3.0, 1.0, 0.2]

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

print(softmax(scores))
///
Output:
[ 0.8360188   0.11314284  0.05083836]
Softmax plot example in Python:
///
# Plot softmax curves (reuses the softmax function defined above)
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-2.0, 6.0, 0.1)
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])

# First plot: the raw scores
plt.plot(x, scores.T, linewidth=2)
plt.show()

# Second plot: softmax of the scores
plt.plot(x, softmax(scores).T, linewidth=2)
plt.show()
///
[First plot: the three raw score lines. Second plot: their softmax probabilities.]
As the value of the blue line grows, the softmax outputs of the green and red lines shrink toward zero.
One more thing: what happens if the scores are multiplied or divided by 10?
///
scores2 = np.array([3.0, 1.0, 0.2])
print(softmax(scores2 * 10))
print(softmax(scores2 / 10))
///
Output:
[  9.99999998e-01   2.06115362e-09   6.91440009e-13]
[ 0.38842275  0.31801365  0.2935636 ]
If the scores are multiplied, the differences grow: the probabilities move toward 0 and 1.
If the scores are divided, the differences shrink: the probabilities move toward a uniform distribution.
We can take advantage of these properties.
We want our classifier not to be too sure of itself in the beginning -> divide the scores.
Then, over time, as it learns, it will gain confidence -> multiply the scores.
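A minimal sketch of that schedule, scaling the scores by a single factor (often called a temperature; the name T and the values here are my assumption, not from the text):
///
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

scores = np.array([3.0, 1.0, 0.2])

# Dividing by a large T (early in training) keeps the probabilities
# near uniform; a small T (later on) pushes them toward 0 and 1.
for T in [10.0, 1.0, 0.1]:
    print(T, softmax(scores / T))
///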