updateHyperParameter: update the learning rate using the config table and state table.
optimMethod: the SGD optim method to update.
Deprecated since version 0.2.0: pass an SGD instance instead of a Table.
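If this page documents BigDL's SGD (which the wording "Please input SGD instead of Table" and the 0.2.0 deprecation suggest, though the page does not state it), attaching a schedule would look roughly like the sketch below. The import paths, constructor arguments, and Plateau parameter names are assumptions drawn from that API, not from this page.

```scala
// Sketch only: attach a learning rate schedule to an SGD optim method (assumed BigDL-style API).
import com.intel.analytics.bigdl.numeric.NumericFloat
import com.intel.analytics.bigdl.optim.SGD

val optimMethod = new SGD[Float](
  learningRate = 0.01,
  learningRateSchedule = SGD.Plateau(monitor = "Loss") // Plateau is described below
)
// During training the optimizer calls
//   optimMethod.learningRateSchedule.updateHyperParameter(optimMethod)
// rather than the deprecated updateHyperParameter(config, state).
```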
Plateau is a learning rate schedule that applies when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. The schedule monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate. A minimal sketch of this logic follows the parameter list below.
monitor: quantity to be monitored; can be Loss or score.
factor: factor by which the learning rate will be reduced. new_lr = lr * factor.
patience: number of epochs with no improvement after which the learning rate will be reduced.
mode: one of {min, max}. In min mode, the learning rate will be reduced when the monitored quantity has stopped decreasing; in max mode, it will be reduced when the monitored quantity has stopped increasing.
epsilon: threshold for measuring the new optimum, to only focus on significant changes.
cooldown: number of epochs to wait before resuming normal operation after the learning rate has been reduced.
minLr: lower bound on the learning rate.
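To make the parameter semantics concrete, here is a small self-contained sketch of plateau-style reduction. It is illustrative only, not the schedule's actual implementation; the class name PlateauSketch and its step method are invented for this example.

```scala
// Illustrative plateau-style learning rate reducer (not the library's implementation).
class PlateauSketch(
    mode: String = "min",    // "min" for a loss, "max" for a score
    factor: Double = 0.1,    // new_lr = lr * factor
    patience: Int = 10,      // epochs without improvement before reducing lr
    epsilon: Double = 1e-4,  // minimum change that counts as an improvement
    cooldown: Int = 0,       // epochs to skip counting after a reduction
    minLr: Double = 0.0) {   // lower bound on the learning rate

  private var best = if (mode == "min") Double.MaxValue else Double.MinValue
  private var badEpochs = 0
  private var cooldownLeft = 0

  private def improved(metric: Double): Boolean =
    if (mode == "min") metric < best - epsilon else metric > best + epsilon

  /** Call once per epoch with the monitored metric; returns the (possibly reduced) lr. */
  def step(metric: Double, lr: Double): Double = {
    if (improved(metric)) {
      best = metric
      badEpochs = 0
      lr
    } else if (cooldownLeft > 0) {
      cooldownLeft -= 1 // still cooling down: do not count this epoch as "bad"
      lr
    } else {
      badEpochs += 1
      if (badEpochs > patience) {
        badEpochs = 0
        cooldownLeft = cooldown
        math.max(lr * factor, minLr) // reduce, but never below minLr
      } else lr
    }
  }
}
```

The bad-epoch counter resets whenever the monitored value improves by more than epsilon, and the cooldown window keeps several reductions from firing back-to-back, mirroring the patience and cooldown descriptions above.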