the model to train
the training dataset
the loss function (criterion)
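To ground these three pieces, a minimal sketch of building an optimizer from them, assuming BigDL's Optimizer factory for an RDD of Sample and pre-existing model and trainRDD values:

    import com.intel.analytics.bigdl.nn.ClassNLLCriterion
    import com.intel.analytics.bigdl.optim.Optimizer
    // model: Module[Float] and trainRDD: RDD[Sample[Float]] are assumed to exist.
    val optimizer = Optimizer(
      model = model,                          // the model to train
      sampleRDD = trainRDD,                   // the training dataset
      criterion = ClassNLLCriterion[Float](), // the loss function
      batchSize = 128)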
Clean some internal states, so this or other optimizers can run optimize again. This method is called at the end of optimize, so you need not call it if optimize succeeds; if optimize fails, you may call it before the next optimize.
Disable gradient clipping
Get the directory where the checkpoint is saved
Trigger the optimization process
the trained model
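A short usage sketch tying optimize and clearState together (optimizer as constructed above):

    // Runs training until the end trigger fires and returns the trained model.
    val trainedModel = optimizer.optimize()
    // clearState() runs automatically when optimize() succeeds; after a failure,
    // it may be called manually before the next optimize():
    // optimizer.clearState()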
Enable overwriting the saved checkpoint
a list of ParameterProcessor; order matters
Set a checkpoint saved at path, triggered by trigger
the directory to save checkpoints to
how often to save the checkpoint
the optimizer
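A sketch of the checkpoint-related calls, assuming a writable directory (the path below is hypothetical):

    import com.intel.analytics.bigdl.optim.Trigger
    optimizer
      .setCheckpoint("/tmp/checkpoints", Trigger.everyEpoch) // save once per epoch
      .overWriteCheckpoint() // overwrite the previous checkpoint instead of keeping them all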
Set constant gradient clipping
the minimum value to clip by
the maximum value to clip by
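A sketch of constant clipping, assuming the method name setConstantGradientClipping; the bounds are illustrative:

    // Clamp every gradient component into [-0.5, 0.5].
    optimizer.setConstantGradientClipping(-0.5, 0.5)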
Set a new criterion for the optimizer
Set dropping a certain percentage (dropPercentage) of models during distributed training to accelerate it, because some cached models may take too long.
drop percentage
max drop percentage
batch size
how many iterations to warm up
this optimizer
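A sketch, assuming a method of the form setDropModuleProperty(dropPercentage, maxDropPercentage, batchSize, warmupIteration); the name, parameter order, and values here are assumptions:

    // Drop the slowest 10% of cached models per iteration, capped at 30%,
    // with a batch size of 128 and 200 warm-up iterations (illustrative values).
    optimizer.setDropModuleProperty(0.1, 0.3, 128, 200)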
When to stop, passed in a Trigger
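A sketch of a stopping condition built from a Trigger factory (Trigger.maxEpoch shown; Trigger.maxIteration and friends work the same way):

    import com.intel.analytics.bigdl.optim.Trigger
    // Stop after ten full passes over the training data.
    optimizer.setEndWhen(Trigger.maxEpoch(10))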
Clip gradient to a maximum L2-norm
the gradient L2-norm threshold
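A sketch of norm-based clipping, assuming the method name setGradientClippingByl2Norm; the threshold is illustrative:

    // Rescale the whole gradient whenever its L2-norm exceeds 2.0.
    optimizer.setGradientClippingByl2Norm(2.0)
    // Either clipping mode can later be turned off again:
    optimizer.disableGradientClipping()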
Set a model to the optimizer. Notice: if the current optimMethod in this optimizer is not a global optimMethod, setModel will throw an exception; use setModelAndOptimMethods instead.
new model
Set a new model and new optimMethods to the optimizer.
new model
new optimMethods
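A sketch of swapping the model and its optimization methods together; the submodule names are hypothetical:

    import com.intel.analytics.bigdl.optim.{Adam, SGD}
    optimizer.setModelAndOptimMethods(newModel, Map(
      "wide" -> new SGD[Float](learningRate = 0.05), // one OptimMethod per named submodule
      "deep" -> new Adam[Float]()))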
Set an optimization method
Set optimization methods for each submodule.
A mapping of submodule -> OptimMethod
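A sketch contrasting the single-method and per-submodule forms; the submodule names are hypothetical:

    import com.intel.analytics.bigdl.optim.{Adam, SGD}
    // One optimization method for the whole model:
    optimizer.setOptimMethod(new Adam[Float]())
    // Or one method per named submodule:
    optimizer.setOptimMethods(Map(
      "encoder" -> new Adam[Float](),
      "decoder" -> new SGD[Float]()))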
Set a state (learning rate, epochs, ...) to the optimizer
the state to be saved
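A sketch, assuming BigDL's Table builder T; the state keys shown are assumptions based on the description above:

    import com.intel.analytics.bigdl.utils.T
    // "learningRate" and "epoch" are assumed key names, per the description.
    optimizer.setState(T("learningRate" -> 0.01, "epoch" -> 5))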
Set a new training dataset.
training Samples
mini batch size
feature padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
label padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
the optimizer
Set a new training dataset. Users can supply a customized implementation of trait MiniBatch to define how data is organized and retrieved in a mini batch.
training Samples
mini batch size
the Optimizer
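A sketch of replacing the training data between runs, using the padding overload; the default PaddingParam values are assumptions:

    import com.intel.analytics.bigdl.dataset.PaddingParam
    // newTrainRDD: RDD[Sample[Float]] is assumed to exist.
    optimizer.setTrainData(newTrainRDD, 128,
      PaddingParam[Float](), // feature padding strategy (assumed default)
      PaddingParam[Float]()) // label padding strategy (assumed default)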
Enable train summary.
Set a validation evaluation
how often to evaluate the validation set
the validation dataset as an RDD of Sample
a set of validation methods (ValidationMethod)
batch size
construct MiniBatch with a specified miniBatch type
Set a validation evaluation
how often to evaluate the validation set
the validation dataset as an RDD of Sample
a set of validation methods (ValidationMethod)
batch size
this optimizer
Set a validation evaluation
how often to evaluate the validation set
the validation dataset as an RDD of Sample
a set of validation methods (ValidationMethod)
batch size
feature padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
label padding strategy, see com.intel.analytics.bigdl.dataset.PaddingParam for details.
this optimizer
Set a validation evaluation
how often to evaluate the validation set
the validation dataset as a DataSet of MiniBatch
a set of validation methods (ValidationMethod)
this optimizer
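A sketch of the RDD-of-Sample overload, assuming a validation RDD named valRDD and Top1Accuracy as the metric:

    import com.intel.analytics.bigdl.optim.{Trigger, Top1Accuracy}
    optimizer.setValidation(
      Trigger.everyEpoch,             // how often to evaluate
      valRDD,                         // RDD[Sample[Float]]
      Array(new Top1Accuracy[Float]), // validation methods
      128)                            // batch size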
Enable validation summary.
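A sketch of enabling both summaries for TensorBoard-style visualization; the log directory and app name are hypothetical:

    import com.intel.analytics.bigdl.visualization.{TrainSummary, ValidationSummary}
    val logDir = "/tmp/bigdl_summaries" // hypothetical path
    optimizer.setTrainSummary(TrainSummary(logDir, "myApp"))
    optimizer.setValidationSummary(ValidationSummary(logDir, "myApp"))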
Make the optimizer not check the singleton model on a node
Deprecated (since version 0.1.0): use bigdl.check.singleton instead
The optimizer that runs on a distributed cluster.