2/24/2016

Deep learning study - one-hot encoding #4



One-hot encoding.
It turns the probabilistic values from softmax into definite 1-or-0 labels:
the vector has the value 1.0 for the correct class and 0 everywhere else.
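For illustration, a minimal numpy sketch of one-hot encoding (the label list and class count below are made up):
///
import numpy as np

def one_hot(labels, num_classes):
    """Turn integer class labels into one-hot row vectors."""
    encoded = np.zeros((len(labels), num_classes))
    encoded[np.arange(len(labels)), labels] = 1.0  # row i gets a 1.0 in column labels[i]
    return encoded

print(one_hot([0, 2, 1], 3))
///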






2/23/2016

Deep learning study - logistic classifier #3

Logistic Classifier 


The logistic classifier is a linear model, similar to the equation of a plane:

y = WX + b

W is the weight matrix, X is the input vector, and y is the output vector.
b is the bias, which shifts the decision boundary.

This form, in the context of logistic regression, produces what are called logits.

y is the result of the vector calculation: a score, not a probability.
To turn the scores into probabilities, apply the softmax function:

softmax(y_i) = exp(y_i) / sum_j exp(y_j)

Each output lies between 0 and 1 and they sum to 1, so we can pick the class whose probability is closest to 1.
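As a minimal sketch of the score calculation (the weight, input, and bias values below are made up for illustration):
///
import numpy as np

W = np.array([[0.5, -1.0],     # hypothetical weight matrix: 3 classes x 2 inputs
              [1.5,  2.0],
              [0.1,  0.3]])
X = np.array([1.0, 2.0])       # hypothetical input vector
b = np.array([0.1, 0.2, 0.3])  # hypothetical bias, one per class

y = W.dot(X) + b               # logits: one score per class
print(y)                       # scores: -1.4, 5.7, 1.0
///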


Softmax example code in Python
///
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    # exponentiate each score, then normalize over the class axis (axis 0)
    return np.exp(x) / np.sum(np.exp(x), axis=0)

scores = [3.0, 1.0, 0.2]
print(softmax(scores))
///
The result is
[ 0.8360188   0.11314284  0.05083836]
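One caveat the code above doesn't handle: np.exp overflows for very large scores. A common variant (my addition, not from the lecture) subtracts the maximum score first, which leaves the probabilities unchanged:
///
import numpy as np

def softmax_stable(x):
    """Softmax with the max shifted out to avoid overflow in np.exp."""
    shifted = x - np.max(x, axis=0)  # same probabilities, safer exponents
    return np.exp(shifted) / np.sum(np.exp(shifted), axis=0)

print(softmax_stable(np.array([1000.0, 1001.0, 1002.0])))  # no overflow warning
///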


Softmax plot example in Python
///
# Plot softmax curves (reuses the softmax function defined above)
import numpy as np
import matplotlib.pyplot as plt

x = np.arange(-2.0, 6.0, 0.1)
# three score curves: one grows with x, the other two stay constant
scores = np.vstack([x, np.ones_like(x), 0.2 * np.ones_like(x)])

plt.plot(x, scores.T, linewidth=2)           # first plot: the raw scores
plt.show()

plt.plot(x, softmax(scores).T, linewidth=2)  # second plot: softmax outputs
plt.show()
///

[First plot: the three raw score lines as x varies]

[Second plot: the corresponding softmax outputs]

The second graph shows the softmax output values.
As the blue line's score grows, its probability approaches 1, while the green and red lines fall toward zero.


One more thing:
what happens if the scores are multiplied or divided by 10?
///
scores2 = np.array([3.0, 1.0, 0.2])
print(softmax(scores2 * 10))  # scale the scores up
print(softmax(scores2 / 10))  # scale the scores down
///

The results are
[ 9.99999998e-01 2.06115362e-09 6.91440009e-13]
[ 0.38842275 0.31801365 0.2935636 ]

Multiplying the scores magnifies the differences: the probabilities get pushed toward 0 and 1.
Dividing shrinks the differences: the probabilities move toward a uniform distribution.


We can take advantage of these properties.
We want the classifier not to be too sure of itself in the beginning -> divide the scores.
Then, over time, it gains confidence as it learns -> multiply the scores.
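This scaling acts like a temperature knob. A minimal sketch (the name T and the helper below are my own illustration, not from the lecture):
///
import numpy as np

def softmax(x):
    return np.exp(x) / np.sum(np.exp(x), axis=0)

def softmax_with_temperature(scores, T):
    """Large T -> flatter, less confident; small T -> peakier, more confident."""
    return softmax(scores / T)

scores2 = np.array([3.0, 1.0, 0.2])
print(softmax_with_temperature(scores2, 10.0))  # early training: near-uniform
print(softmax_with_temperature(scores2, 0.1))   # later: close to one-hot
///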







2/22/2016

Deep learning study - supervised classification #2



Classification is related to regression, reinforcement learning, ranking, and detection.


Deep learning study - introduction #1

Deep learning lecture
(TensorFlow based)

part 1.

1. logistic classification

2. stochastic optimization

3. general data practices to train models (data & parameter tuning)


part 2. (we're going to go deeper)

1. Deep networks

2. Regularization (to train even bigger models)


part 3. (will be a deep dive into image and convolutional models)

1. convolutional networks


part 4. (all about text and sequence in general)

1. embeddings

2. recurrent models