Performs a back-propagation step through the criterion, with respect to the given input.
input: input data
target: target data
returns: the gradient corresponding to the input data
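A minimal usage sketch. The class and package names (BigDL's MSLECriterion and Tensor) are assumptions inferred from the Keras formula documented below:

    import com.intel.analytics.bigdl.nn.MSLECriterion
    import com.intel.analytics.bigdl.tensor.Tensor

    val criterion = MSLECriterion[Float]()
    val input  = Tensor[Float](2, 3).rand()   // predictions
    val target = Tensor[Float](2, 3).rand()   // ground truth
    criterion.forward(input, target)          // compute the loss first
    val gradInput = criterion.backward(input, target)
    // gradInput has the same shape as input: d(loss) / d(input)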
Creates a deep copy of this criterion.
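For example (again assuming BigDL's MSLECriterion), the copy is independent of the original:

    import com.intel.analytics.bigdl.nn.MSLECriterion

    val criterion = MSLECriterion[Float]()
    val copy = criterion.cloneCriterion()
    // forward/backward calls on `copy` do not affect the
    // output/gradInput state of `criterion`, and vice versa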
Takes an input object and computes the corresponding loss of the criterion, compared with target.
input: input data
target: target data
returns: the loss of the criterion
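A short sketch with a hand-checkable value, assuming BigDL's MSLECriterion and Tensor API as above:

    import com.intel.analytics.bigdl.nn.MSLECriterion
    import com.intel.analytics.bigdl.tensor.Tensor

    val criterion = MSLECriterion[Float]()
    val input  = Tensor[Float](2).fill(0f)                  // log(input + 1) ≈ 0
    val target = Tensor[Float](2).fill(math.E.toFloat - 1)  // log(target + 1) = 1
    val loss = criterion.forward(input, target)             // ≈ 1.0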
Computes the gradient of the criterion with respect to its own input. The result is returned in gradInput, and the gradInput state variable is updated accordingly.
input: input data
target: target data / labels
returns: the gradient with respect to the input
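A sketch of the state-updating behaviour, assuming the Torch-style BigDL contract in which backward delegates to updateGradInput:

    import com.intel.analytics.bigdl.nn.MSLECriterion
    import com.intel.analytics.bigdl.tensor.Tensor

    val criterion = MSLECriterion[Float]()
    val input  = Tensor[Float](4).rand()
    val target = Tensor[Float](4).rand()
    val grad = criterion.updateGradInput(input, target)
    println(grad == criterion.gradInput)  // true: state variable was updated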
Computes the loss using the input and the objective function. This function returns the result, which is also stored in the output field.
input: input of the criterion
target: target or labels
returns: the loss of the criterion
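Sketch (same BigDL assumptions as above): after the call, the return value and the output field agree:

    import com.intel.analytics.bigdl.nn.MSLECriterion
    import com.intel.analytics.bigdl.tensor.Tensor

    val criterion = MSLECriterion[Float]()
    val input  = Tensor[Float](4).rand()
    val target = Tensor[Float](4).rand()
    val loss = criterion.updateOutput(input, target)
    println(loss == criterion.output)  // true: result stored in `output`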
This criterion is the same as the mean_squared_logarithmic_error loss in Keras. It calculates:

    first_log  = K.log(K.clip(y, K.epsilon(), Double.MaxValue) + 1.)
    second_log = K.log(K.clip(x, K.epsilon(), Double.MaxValue) + 1.)

and outputs:

    K.mean(K.square(first_log - second_log))

Here, x and y may or may not have a batch dimension.

T: the numeric type in the criterion, usually Float or Double
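As a sanity check, the formula can be reproduced in plain Scala. The epsilon value (1e-7, Keras's default) is an assumption:

    // Plain-Scala sketch of the MSLE formula above, for flat arrays.
    def msle(x: Array[Double], y: Array[Double]): Double = {
      val eps = 1e-7  // assumed value of K.epsilon()
      def clippedLog1p(v: Double) =
        math.log(math.min(math.max(v, eps), Double.MaxValue) + 1.0)
      val squares = x.zip(y).map { case (xi, yi) =>
        val d = clippedLog1p(yi) - clippedLog1p(xi)
        d * d
      }
      squares.sum / squares.length
    }

    // x = (0, 0), y = (e - 1, e - 1): log(y + 1) = 1, log(x + 1) ≈ 0,
    // so the loss is approximately 1.
    println(msle(Array(0.0, 0.0), Array(math.E - 1, math.E - 1)))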