(optional) Specify a label value that should be ignored when computing the loss.
How to normalize the output loss.
Performs a back-propagation step through the criterion with respect to the given input.
input data
target
gradient corresponding to input data
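To make the call sequence concrete, here is a minimal usage sketch, assuming BigDL-style imports, the companion apply, and 1-based labels (none of which are confirmed by this page):

```scala
import com.intel.analytics.bigdl.nn.SoftmaxWithCriterion
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.tensor.{Storage, Tensor}

// Raw (pre-softmax) scores for a batch of 3 samples over 5 classes.
val input = Tensor[Float](3, 5).rand()
// One label per sample; 1-based indexing assumed, as in other BigDL criterions.
val target = Tensor[Float](Storage(Array(1.0f, 3.0f, 5.0f)))

val criterion = SoftmaxWithCriterion[Float]()
val loss = criterion.forward(input, target)  // scalar loss value
val grad = criterion.backward(input, target) // d(loss)/d(input), same shape as input
```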
Deep copy this criterion.
Takes an input object and computes the corresponding loss of the criterion, compared with the target.
input data
target
the loss of the criterion
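As a plain-Scala illustration of the quantity returned (a hypothetical helper, not the library's code), the per-sample loss is the negative log of the softmax probability assigned to the target class:

```scala
// Numerically stable softmax + negative log-likelihood for one sample.
// `logits` are the raw scores; `label` is a 0-based target index here.
def softmaxNll(logits: Array[Double], label: Int): Double = {
  val m = logits.max                        // shift by the max for stability
  val z = logits.map(x => math.exp(x - m)).sum
  -((logits(label) - m) - math.log(z))      // -log softmax(logits)(label)
}

softmaxNll(Array(2.0, 1.0, 0.1), label = 0) // ≈ 0.417
```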
Read the normalization mode parameter and compute the normalizer based on the input size. If normalizeMode is VALID, the count of valid outputs is read from validCount, unless it is -1, in which case all outputs are assumed to be valid.
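A minimal sketch of that normalizer logic, with mode names taken from the description above (illustrative stand-ins, not necessarily the library's enum):

```scala
sealed trait NormMode
case object Full extends NormMode      // normalize by batchSize * spatialDim
case object Valid extends NormMode     // normalize by the number of valid outputs
case object BatchSize extends NormMode // normalize by the batch size only
case object NoNorm extends NormMode    // no normalization

def normalizer(mode: NormMode, batchSize: Int, spatialDim: Int,
               validCount: Int): Double = mode match {
  case Full      => (batchSize * spatialDim).toDouble
  case Valid     => // validCount == -1 means every output is assumed valid
    if (validCount == -1) (batchSize * spatialDim).toDouble else validCount.toDouble
  case BatchSize => batchSize.toDouble
  case NoNorm    => 1.0
}
```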
Computes the gradient of the criterion with respect to its own input. The result is returned in gradInput, and the gradInput state variable is updated accordingly.
input data
target data / labels
gradient of input
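For the fused softmax-plus-log-loss, the gradient with respect to the raw scores has the standard closed form softmax(input) minus the one-hot target; a hypothetical standalone sketch (the criterion would additionally divide by the normalizer described above):

```scala
// Gradient of -log softmax(logits)(label) w.r.t. logits: p - onehot(label).
def softmaxNllGrad(logits: Array[Double], label: Int): Array[Double] = {
  val m = logits.max
  val exps = logits.map(x => math.exp(x - m))
  val z = exps.sum
  exps.zipWithIndex.map { case (e, i) =>
    val p = e / z
    if (i == label) p - 1.0 else p
  }
}
```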
Computes the loss.
input data; input.size(1) is the batch size and input.size(2) is the softmax axis, i.e. the number of classes
target labels
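Tying the layout, ignoreLabel, and VALID normalization together, a hedged batched sketch (illustrative only, not the library's code path; labels are 0-based here):

```scala
// input(i) holds the raw scores for sample i of the batch.
def batchLoss(input: Array[Array[Double]], target: Array[Int],
              ignoreLabel: Option[Int]): Double = {
  var sum = 0.0
  var validCount = 0
  for (i <- input.indices) {
    if (!ignoreLabel.contains(target(i))) { // skip samples with the ignored label
      val logits = input(i)
      val m = logits.max
      val z = logits.map(x => math.exp(x - m)).sum
      sum += -((logits(target(i)) - m) - math.log(z))
      validCount += 1
    }
  }
  sum / math.max(validCount, 1)             // VALID-style normalization
}
```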
Computes the multinomial logistic loss for a one-of-many classification task, passing real-valued predictions through a softmax to get a probability distribution over classes. It should be preferred over a separate SoftmaxLayer + MultinomialLogisticLossLayer, as its gradient computation is more numerically stable.
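Constructing the criterion with the optional parameters described above might look like this (the companion apply and the NormMode values are assumed from BigDL conventions, not confirmed by this page):

```scala
import com.intel.analytics.bigdl.nn.{NormMode, SoftmaxWithCriterion}
import com.intel.analytics.bigdl.numeric.NumericFloat

// Ignore label 0 when computing the loss, and normalize by the
// count of valid (non-ignored) outputs.
val crit = SoftmaxWithCriterion[Float](
  ignoreLabel = Some(0),
  normalizeMode = NormMode.VALID
)
```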