Softmax Classification
classification, likelihood, softmax, one-hot vectors, zero-one loss, conditional likelihood, MLE, NLL, cross-entropy loss
We will introduce classification problems and some simple models for classification.
Classification
Classification problems are supervised machine learning problems where the task is to predict a discrete class for a given input (unlike regression, where the output was numeric). A typical example is handwritten digit recognition, where the input is an image of a handwritten digit and the output is one of the discrete categories $\{0, \ldots, 9\}$. As in all supervised learning problems, the training data consists of a set of example input-output pairs.
Likelihood
A natural objective in classification would be to minimize the number of misclassified examples in the training data. This count is known as the zero-one loss. However, the zero-one loss has some undesirable properties for training; in particular, it is discontinuous. A small change in one of the parameters either has no effect on the loss, or flips one or more of the predictions from correct to incorrect (or vice versa), causing a discontinuous jump in the objective. As a result, the gradient of the zero-one loss with respect to the parameters is either undefined or zero, which makes it useless for gradient-based training.
A more commonly used objective for classification is conditional likelihood: the probability of the observed data given our model and the inputs. Instead of predicting a single class for each instance, we let our model predict a probability distribution over all classes. Then we adjust the weights of the model to increase the probabilities for the correct classes and decrease them for the others. Maximizing this objective is known as maximum likelihood estimation (MLE).
Let $\mathcal{X}=\{x_1,\ldots,x_N\}$ be the inputs in the training data, $\mathcal{Y}=\{y_1,\ldots,y_N\}$ be the correct classes and $\theta$ be the parameters of our model. Conditional likelihood is:
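$$P(\mathcal{Y}|\mathcal{X},\theta) = P(y_1,\ldots,y_N|x_1,\ldots,x_N,\theta) = \prod_{n=1}^{N} P(y_n|x_n,\theta)$$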
The second equation assumes that the data instances were generated independently.
We usually work with log likelihood for mathematical convenience: log is a monotonically increasing function, so maximizing likelihood is the same as maximizing log likelihood:
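$$\ell(\theta) = \log P(\mathcal{Y}|\mathcal{X},\theta) = \sum_{n=1}^{N} \log P(y_n|x_n,\theta)$$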
We will typically work with the negative of $\ell$ (machine learning people like to minimize), which is known as the negative log likelihood (NLL), or cross-entropy loss.
Softmax
A classification model for a problem with $C$ classes typically generates $y\in\mathbb{R}^C$, a vector of $C$ scores (e.g. we might use multivariate linear regression with a vector output as seen in the last chapter). In general these scores will be arbitrary real numbers. To go from arbitrary scores $y\in\mathbb{R}^C$ to normalized probability estimates $p\in\mathbb{R}^C$ for a single instance, we use exponentiation and normalization:
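$$p_i = \frac{\exp y_i}{\sum_{c=1}^{C} \exp y_c}$$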
where $i,c\in\{1,\ldots,C\}$ range over classes, and $p_i, y_i, y_c$ refer to class probabilities and scores for a single instance. This is called the softmax function. A model that converts the unnormalized values at the end of a linear regression to normalized probabilities for classification is called the softmax classifier.
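As a concrete sketch, softmax can be written in a few lines of plain Julia (this is illustrative, not Knet's implementation); subtracting the maximum score before exponentiating leaves the result unchanged but avoids overflow for large scores:

# Plain-Julia softmax sketch (illustrative, not Knet's implementation).
# Subtracting maximum(y) avoids overflow without changing the result.
function softmax(y)
    e = exp.(y .- maximum(y))
    return e ./ sum(e)
end

For example, softmax([1.0, 2.0, 3.0]) ≈ [0.09, 0.24, 0.67].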
We need to figure out the backward pass for the softmax function. In other words, if someone gives us the gradient of some objective $J$ with respect to the class probabilities $p$ for a single training instance, what is the gradient with respect to the softmax input $y$? First we'll find the partial derivative of one component of $p$ with respect to one component of $y$:
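$$\frac{\partial p_i}{\partial y_j} = \frac{[i=j]\,\exp(y_i)\sum_{c}\exp(y_c) - \exp(y_i)\exp(y_j)}{\left(\sum_{c}\exp(y_c)\right)^2} = [i=j]\,p_i - p_i p_j$$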
The square brackets are the Iverson bracket notation, i.e. $[A]$ is 1 if $A$ is true, and 0 if $A$ is false.
Note that a single entry in $y$ affects $J$ through multiple paths ($y_j$ contributes to the denominator of every $p_i$), and these effects need to be added up to obtain $\partial J/\partial y_j$:
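$$\frac{\partial J}{\partial y_j} = \sum_{i=1}^{C} \frac{\partial J}{\partial p_i}\,\frac{\partial p_i}{\partial y_j} = \sum_{i=1}^{C} \frac{\partial J}{\partial p_i}\left([i=j]\,p_i - p_i p_j\right) = p_j\left(\frac{\partial J}{\partial p_j} - \sum_{i=1}^{C} p_i\,\frac{\partial J}{\partial p_i}\right)$$

In code, the backward pass implied by this formula is a one-liner; softmax_back below is a hypothetical helper name used only for illustration, since in Knet this gradient is derived automatically by grad:

softmax_back(p, dJdp) = p .* (dJdp .- sum(p .* dJdp))  # dJ/dy given p and dJ/dp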
One-hot vectors
When using a probabilistic classifier, it is convenient to represent the desired output as a one-hot vector, i.e. a vector in which all entries are '0' except a single '1'. If the correct class is $c\in\{1,\ldots,C\}$, we represent this with a one-hot vector $p\in\mathbb{R}^C$ where $p_c = 1$ and $p_{i\neq c} = 0$. Note that $p$ can be viewed as a probability vector in which all the probability mass is concentrated at class $c$. This representation also allows us to have probabilistic targets, where there is not a single correct answer but a target probability associated with each class. Given a one-hot (or probabilistic) target $p$ and the model prediction $\hat{p}$, we can write the log-likelihood for a single instance as:
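$$\ell(p,\hat{p}) = \sum_{c=1}^{C} p_c \log \hat{p}_c$$

For a one-hot $p$ this is simply $\log \hat{p}_c$, the log probability assigned to the correct class $c$.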
Gradient of log likelihood
To compute the gradient for log likelihood, we need to make the normalization of $\hat{p}$ explicit:
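$$\ell = \sum_{c=1}^{C} p_c \log \hat{p}_c = \sum_{c=1}^{C} p_c \log \frac{\exp y_c}{\sum_{k=1}^{C} \exp y_k} = \sum_{c=1}^{C} p_c\, y_c - \log \sum_{k=1}^{C} \exp y_k$$

where the last step uses $\sum_c p_c = 1$.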
The gradient with respect to unnormalized y scores takes a particularly simple form:
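$$\frac{\partial \ell}{\partial y_j} = p_j - \frac{\exp y_j}{\sum_{k=1}^{C} \exp y_k} = p_j - \hat{p}_j$$

In other words, the gradient of the negative log likelihood with respect to the scores is just $\hat{p} - p$, the difference between the predicted and target probability vectors.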
The gradient with respect to $\hat{p}$ involves terms of the form $p_c/\hat{p}_c$, which cause numerical overflow when some components of $\hat{p}$ get very small. In practice we usually skip that step and directly compute the gradient with respect to $y$, which is numerically stable.
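Concretely, the negative log likelihood can also be computed directly from the unnormalized scores using the log-sum-exp trick; the sketch below is illustrative plain Julia, not Knet's nll implementation:

# Numerically stable NLL for a single instance, given unnormalized scores y
# and a one-hot (or probabilistic) target p: NLL = logsumexp(y) - sum(p .* y)
logsumexp(y) = (m = maximum(y); m + log(sum(exp.(y .- m))))
nll_instance(y, p) = logsumexp(y) - sum(p .* y)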
MNIST example
Let's try our softmax classifier on the MNIST handwritten digit classification dataset. Here are the first 8 images from MNIST; the goal is to look at the pixels and classify each image as one of the digits 0-9:
Load and minibatch the data. dtrn and dtst consist of xy pairs [ (x1,y1), (x2,y2), ... ] where xi,yi are minibatches of 100 instances:
using Knet
include(Knet.dir("data","mnist.jl"))  # provides the mnist() data loader
xtrn,ytrn,xtst,ytst = mnist()         # training and test images and labels
dtst = minibatch(xtst,ytst,100)       # test minibatches of 100 instances
dtrn = minibatch(xtrn,ytrn,100)       # training minibatches of 100 instances
Here is the softmax classifier in Knet:
predict(w,x) = w[1]*mat(x) .+ w[2] # mat converts x to 2D
loss(w,x,ygold) = nll(predict(w,x),ygold) # nll computes negative log likelihood
lossgradient = grad(loss) # grad returns gradient function
wsoft=[ 0.1*randn(10,784), zeros(10,1) ] # initial weights and bias
Here is the SGD training loop (see the full example in the Knet tutorial):
function train!(w, data; lr=.1)
    for (x,y) in data
        dw = lossgradient(w, x, y)   # gradient of the loss w.r.t. each weight array
        for i in 1:length(w)
            w[i] -= lr * dw[i]       # SGD update
        end
    end
    return w
end
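One simple way to run it and monitor progress, using only the functions defined above; the avgloss helper and the 10-epoch count are an illustrative sketch, and the full tutorial example reports losses and error rates somewhat differently:

avgloss(w, data) = sum(loss(w,x,y) for (x,y) in data) / length(data)  # mean minibatch NLL
for epoch in 1:10
    train!(wsoft, dtrn)
    println((epoch, avgloss(wsoft,dtrn), avgloss(wsoft,dtst)))
end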
Here are the plots of the negative log likelihood and misclassification error vs training epochs:
We can observe a few things. First, the training losses are better than the test losses. This means there is some overfitting, i.e. the model is learning spurious regularities in the training data that do not generalize to the test data. Second, the training loss does not look like it is going down to zero. This means there is also underfitting, i.e. the softmax model is not flexible enough to fit the training data exactly.
Representational power
So far we have seen how to create a machine learning model as a differentiable program (linear regression, softmax classification) whose parameters can be adjusted to hopefully imitate whatever process generated our training data. A natural question to ask is whether a particular model can behave like any system we want (given the right parameters) or whether there is a limit to what it can represent.
It turns out the softmax classifier is quite limited in its representational power: it can only represent linear classification boundaries. To show this, remember the form of the softmax classifier which gives the probability of the i'th class as:
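$$p_i = \frac{\exp y_i}{\sum_{c=1}^{C} \exp y_c}$$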
where $y_i$ is a linear function of the input $x$. Note that $p_i$ is a monotonically increasing function of $y_i$, so for two classes $i$ and $j$, $p_i > p_j$ iff $y_i > y_j$. The boundary between two classes $i$ and $j$ is the set of inputs for which the probabilities of the two classes are equal:
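$$p_i = p_j \;\Leftrightarrow\; y_i = y_j \;\Leftrightarrow\; w_i x + b_i = w_j x + b_j \;\Leftrightarrow\; (w_i - w_j)\,x + (b_i - b_j) = 0$$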
where $w_i, b_i$ refer to the i'th row of $w$ and $b$. This is a linear equation, i.e. the boundary between two classes will always be linear in the input space with the softmax classifier:
In the MNIST example, the relation between the pixels and the digit classes is unlikely to be this simple. That is why we are stuck at 6-7% training error. To get better results we need more powerful models.