Performs a back-propagation step through the criterion, with respect to the given input.
input data
target
gradient corresponding to input data
Deep copy of this criterion.
Takes an input object, and computes the corresponding loss of the criterion, compared with target.
input data
target
the loss of the criterion
Computes the gradient of the criterion with respect to its own input. This is returned in gradInput. Also, the gradInput state variable is updated accordingly.
input data
target data / labels
gradient of input
Computes the loss using input and objective function. This function returns the result which is stored in the output field.
input of the criterion
target or labels
the loss of the criterion
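
Taken together, these entries describe a common pattern: forward computes the loss and stores it in the output field, while backward computes the gradient with respect to the input and stores it in the gradInput field, delegating to updateOutput and updateGradInput respectively. The following is a minimal Python sketch of that pattern only; the class name and snake_case identifiers are assumptions made for illustration, not this library's actual API.

# Illustrative sketch only: the names AbstractCriterion, update_output,
# update_grad_input, output, and grad_input are assumptions, not
# identifiers from this library.
class AbstractCriterion:
    def __init__(self):
        self.output = 0.0        # last computed loss
        self.grad_input = None   # last computed gradient w.r.t. the input

    def update_output(self, input, target):
        raise NotImplementedError

    def update_grad_input(self, input, target):
        raise NotImplementedError

    def forward(self, input, target):
        # Computes the loss and stores it in the output field.
        self.output = self.update_output(input, target)
        return self.output

    def backward(self, input, target):
        # Computes the gradient w.r.t. the input and stores it in grad_input.
        self.grad_input = self.update_grad_input(input, target)
        return self.grad_input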
Creates a criterion that measures the loss given an input x which is a 1-dimensional vector and a label y (1 or -1). This is usually used for measuring whether two inputs are similar or dissimilar, e.g. using the L1 pairwise distance, and is typically used for learning nonlinear embeddings or semi-supervised learning.
                 ⎧ x_i,                   if y_i ==  1
loss(x, y) = 1/n ⎨
                 ⎩ max(0, margin - x_i),  if y_i == -1
If x and y are n-dimensional Tensors, the sum operation still operates over all the elements, and divides by n (this can be avoided if one sets the internal variable sizeAverage to false). The margin has a default value of 1, or can be set in the constructor.
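
As a rough illustration of the formula above, here is a NumPy sketch of this loss and its gradient, building on the AbstractCriterion sketch shown earlier. The margin and size_average arguments mirror the margin and sizeAverage settings described in the text; every name here is illustrative rather than part of this library's API.

import numpy as np

class HingeEmbeddingSketch(AbstractCriterion):
    # Illustrative only; follows the piecewise formula above.
    def __init__(self, margin=1.0, size_average=True):
        super().__init__()
        self.margin = margin
        self.size_average = size_average

    def update_output(self, input, target):
        x = np.asarray(input, dtype=float)
        y = np.asarray(target, dtype=float)
        # x_i where y_i == 1, max(0, margin - x_i) where y_i == -1
        per_element = np.where(y == 1, x, np.maximum(0.0, self.margin - x))
        loss = per_element.sum()
        if self.size_average:
            loss /= x.size   # divide by n unless sizeAverage is disabled
        return loss

    def update_grad_input(self, input, target):
        x = np.asarray(input, dtype=float)
        y = np.asarray(target, dtype=float)
        # d/dx_i is 1 where y_i == 1, and -1 where y_i == -1 and margin - x_i > 0
        grad = np.where(y == 1, 1.0,
                        np.where(self.margin - x > 0, -1.0, 0.0))
        if self.size_average:
            grad /= x.size
        return grad

# Usage sketch:
criterion = HingeEmbeddingSketch(margin=1.0)
loss = criterion.forward([0.2, 0.8, 0.3], [1, -1, -1])
grad = criterion.backward([0.2, 0.8, 0.3], [1, -1, -1])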