com.intel.analytics.bigdl.optim.SGD

Plateau

case class Plateau(monitor: String, factor: Float = 0.1, patience: Int = 10, mode: String = "min", epsilon: Float = 1.0E-4, cooldown: Int = 0, minLr: Float = 0) extends LearningRateSchedule with Product with Serializable

Plateau is a learning rate schedule that reduces the learning rate when a monitored metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. The schedule monitors a quantity and, if no improvement is seen for a 'patience' number of epochs, reduces the learning rate.

monitor

quantity to be monitored; can be Loss or score

factor

factor by which the learning rate will be reduced. new_lr = lr * factor

patience

number of epochs with no improvement after which learning rate will be reduced.

mode

one of {min, max}. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing

epsilon

threshold for measuring the new optimum, to only focus on significant changes.

cooldown

number of epochs to wait before resuming normal operation after lr has been reduced.

minLr

lower bound on the learning rate.
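The reduction rule described above can be sketched in plain Scala. This is a minimal self-contained illustration of the semantics of `factor`, `patience`, `mode`, `epsilon`, `cooldown`, and `minLr`; the class and method names (`PlateauSketch`, `step`) are illustrative and not part of BigDL's API:

```scala
// Minimal self-contained sketch of the plateau rule described above.
// PlateauSketch and step are illustrative names, not BigDL's API.
class PlateauSketch(
    factor: Float = 0.1f,
    patience: Int = 10,
    mode: String = "min",
    epsilon: Float = 1e-4f,
    cooldown: Int = 0,
    minLr: Float = 0f) {

  // Is `cur` a significant improvement over the best value so far?
  private val improved: (Float, Float) => Boolean =
    if (mode == "min") { (cur, b) => cur < b - epsilon }
    else { (cur, b) => cur > b + epsilon }

  private var best = if (mode == "min") Float.MaxValue else Float.MinValue
  private var badEpochs = 0     // epochs since the last improvement
  private var cooldownLeft = 0  // epochs to wait after a reduction

  /** Feed one epoch's monitored value; returns the (possibly reduced) lr. */
  def step(metric: Float, lr: Float): Float = {
    if (cooldownLeft > 0) { cooldownLeft -= 1; badEpochs = 0; return lr }
    if (improved(metric, best)) { best = metric; badEpochs = 0; lr }
    else {
      badEpochs += 1
      if (badEpochs >= patience) {
        badEpochs = 0
        cooldownLeft = cooldown
        math.max(lr * factor, minLr)  // reduce, but never below minLr
      } else lr
    }
  }
}
```

For example, with `patience = 2` and `factor = 0.5f`, the learning rate stays at its current value for two non-improving epochs and is halved on the second one.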

Linear Supertypes
Serializable, Serializable, Product, Equals, LearningRateSchedule, AnyRef, Any

Instance Constructors

  1. new Plateau(monitor: String, factor: Float = 0.1, patience: Int = 10, mode: String = "min", epsilon: Float = 1.0E-4, cooldown: Int = 0, minLr: Float = 0)

    monitor

    quantity to be monitored; can be Loss or score

    factor

    factor by which the learning rate will be reduced. new_lr = lr * factor

    patience

    number of epochs with no improvement after which learning rate will be reduced.

    mode

    one of {min, max}. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing

    epsilon

    threshold for measuring the new optimum, to only focus on significant changes.

    cooldown

    number of epochs to wait before resuming normal operation after lr has been reduced.

    minLr

    lower bound on the learning rate.
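As a usage sketch, the constructor above is typically passed to SGD via its `learningRateSchedule` parameter. This snippet assumes BigDL is on the classpath; the hyper-parameter values are illustrative examples, not recommendations:

```scala
// Illustrative construction only; requires BigDL on the classpath.
import com.intel.analytics.bigdl.optim.SGD
import com.intel.analytics.bigdl.optim.SGD.Plateau

val optimMethod = new SGD[Float](
  learningRate = 0.01,
  learningRateSchedule = Plateau(
    monitor = "Loss",   // watch the loss
    factor = 0.1f,      // new_lr = lr * 0.1 on plateau
    patience = 5,       // wait 5 non-improving epochs first
    mode = "min",       // loss should decrease
    epsilon = 1e-4f,    // ignore improvements smaller than this
    cooldown = 2,       // pause monitoring for 2 epochs after a cut
    minLr = 1e-5f       // never go below this learning rate
  )
)
```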

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  7. var best: Float

  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. val cooldown: Int

    number of epochs to wait before resuming normal operation after lr has been reduced.

  10. var currentRate: Double

    Definition Classes
    LearningRateSchedule
  11. val epsilon: Float

    threshold for measuring the new optimum, to only focus on significant changes.

  12. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. val factor: Float

    factor by which the learning rate will be reduced. new_lr = lr * factor

  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. val minLr: Float

    lower bound on the learning rate.

  18. val mode: String

    one of {min, max}. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing

  19. val monitor: String

    quantity to be monitored; can be Loss or score

  20. var monitorOp: (Float, Float) ⇒ Boolean

  21. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  22. final def notify(): Unit

    Definition Classes
    AnyRef
  23. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  24. val patience: Int

    number of epochs with no improvement after which learning rate will be reduced.

  25. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  26. def updateHyperParameter[T](optimMethod: SGD[T]): Unit

    update the learning rate from the config table and state table

    optimMethod

    the initialized optimMethod whose learning rate is updated.

    Definition Classes
    Plateau → LearningRateSchedule
  27. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  29. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )

Deprecated Value Members

  1. def updateHyperParameter(config: Table, state: Table): Unit

    Definition Classes
    LearningRateSchedule
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please input SGD instead of Table
