Class com.intel.analytics.bigdl.optim.SGD.Plateau

case class Plateau(monitor: String, factor: Float = 0.1f, patience: Int = 10, mode: String = "min", epsilon: Float = 1e-4f, cooldown: Int = 0, minLr: Float = 0) extends LearningRateSchedule with Product with Serializable

Plateau is a learning rate schedule that reacts when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. This schedule monitors a quantity and, if no improvement is seen for `patience` epochs, reduces the learning rate.

monitor
    quantity to be monitored; can be "Loss" or "score"

factor
    factor by which the learning rate is reduced: newLr = lr * factor

patience
    number of epochs with no improvement after which the learning rate is reduced

mode
    one of {"min", "max"}. In "min" mode the learning rate is reduced when the monitored quantity has stopped decreasing; in "max" mode, when it has stopped increasing

epsilon
    threshold for measuring a new optimum; only changes larger than epsilon count as improvement

cooldown
    number of epochs to wait before resuming normal operation after the learning rate has been reduced

minLr
    lower bound on the learning rate
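The parameters above can be combined into a schedule and handed to an SGD optim method. The following is a sketch under the assumption (common in BigDL 0.2+) that `SGD` accepts a `learningRateSchedule` constructor parameter; the concrete values are illustrative, not defaults:

```scala
import com.intel.analytics.bigdl.optim.SGD
import com.intel.analytics.bigdl.optim.SGD.Plateau

// Reduce the learning rate by 10x once the monitored loss has not
// improved by more than epsilon for 5 consecutive epochs; after a
// reduction, wait 2 epochs before counting again, and never go below 1e-6.
val schedule = Plateau(
  monitor = "Loss",
  factor = 0.1f,
  patience = 5,
  mode = "min",
  epsilon = 1e-4f,
  cooldown = 2,
  minLr = 1e-6f
)

val optimMethod = new SGD[Float](
  learningRate = 0.01,
  learningRateSchedule = schedule
)
```

With `mode = "max"` the same setup would instead track a validation score that is expected to increase.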

Linear Supertypes
Serializable, Serializable, Product, Equals, LearningRateSchedule, AnyRef, Any

Instance Constructors

  1. new Plateau(monitor: String, factor: Float = 0.1f, patience: Int = 10, mode: String = "min", epsilon: Float = 1e-4f, cooldown: Int = 0, minLr: Float = 0)


Value Members

  1. final def !=(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  2. final def ##(): Int
     Definition Classes: AnyRef → Any
  3. final def ==(arg0: Any): Boolean
     Definition Classes: AnyRef → Any
  4. final def asInstanceOf[T0]: T0
     Definition Classes: Any
  5. var best: Float
  6. def clone(): AnyRef
     Attributes: protected[java.lang]
     Definition Classes: AnyRef
     Annotations: @throws( ... )
  7. val cooldown: Int
     number of epochs to wait before resuming normal operation after the learning rate has been reduced
  8. var currentRate: Double
     Definition Classes: LearningRateSchedule
  9. val epsilon: Float
     threshold for measuring a new optimum; only changes larger than epsilon count as improvement
  10. final def eq(arg0: AnyRef): Boolean
      Definition Classes: AnyRef
  11. val factor: Float
      factor by which the learning rate is reduced: newLr = lr * factor
  12. def finalize(): Unit
      Attributes: protected[java.lang]
      Definition Classes: AnyRef
      Annotations: @throws( classOf[java.lang.Throwable] )
  13. final def getClass(): Class[_]
      Definition Classes: AnyRef → Any
  14. final def isInstanceOf[T0]: Boolean
      Definition Classes: Any
  15. val minLr: Float
      lower bound on the learning rate
  16. val mode: String
      one of {"min", "max"}. In "min" mode the learning rate is reduced when the monitored quantity has stopped decreasing; in "max" mode, when it has stopped increasing
  17. val monitor: String
      quantity to be monitored; can be "Loss" or "score"
  18. var monitorOp: (Float, Float) ⇒ Boolean
  19. final def ne(arg0: AnyRef): Boolean
      Definition Classes: AnyRef
  20. final def notify(): Unit
      Definition Classes: AnyRef
  21. final def notifyAll(): Unit
      Definition Classes: AnyRef
  22. val patience: Int
      number of epochs with no improvement after which the learning rate is reduced
  23. final def synchronized[T0](arg0: ⇒ T0): T0
      Definition Classes: AnyRef
  24. def updateHyperParameter[T](optimMethod: SGD[T]): Unit
      updates the learning rate from the optim method's config and state tables
      optimMethod: the SGD optim method whose learning rate is updated
      Definition Classes: Plateau → LearningRateSchedule
  25. final def wait(): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  26. final def wait(arg0: Long, arg1: Int): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
  27. final def wait(arg0: Long): Unit
      Definition Classes: AnyRef
      Annotations: @throws( ... )
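The interplay of `best`, `monitorOp`, `patience`, and `cooldown` in updateHyperParameter can be illustrated with a simplified, self-contained sketch. This is not BigDL's actual implementation; it only shows the bookkeeping the documented fields imply:

```scala
// Simplified sketch of plateau-style lr bookkeeping: track the best
// metric value seen; after `patience` epochs with no significant
// improvement, multiply the lr by `factor`, respecting cooldown and minLr.
class PlateauSketch(factor: Float = 0.1f, patience: Int = 10,
                    mode: String = "min", epsilon: Float = 1e-4f,
                    cooldown: Int = 0, minLr: Float = 0f) {
  // Analogue of monitorOp: in "min" mode an improvement means the
  // metric dropped by more than epsilon; in "max" mode, rose by more.
  private val improved: (Float, Float) => Boolean =
    if (mode == "min") (cur, best) => cur < best - epsilon
    else (cur, best) => cur > best + epsilon

  private var best = if (mode == "min") Float.MaxValue else Float.MinValue
  private var waitEpochs = 0
  private var cooldownCounter = 0

  // Called once per epoch with the monitored metric; returns the new lr.
  def step(metric: Float, lr: Float): Float = {
    if (cooldownCounter > 0) { cooldownCounter -= 1; waitEpochs = 0 }
    if (improved(metric, best)) {
      best = metric; waitEpochs = 0; lr
    } else if (cooldownCounter == 0 && waitEpochs >= patience) {
      waitEpochs = 0; cooldownCounter = cooldown
      math.max(lr * factor, minLr)   // never reduce below minLr
    } else {
      waitEpochs += 1; lr
    }
  }
}
```

In BigDL itself this logic runs inside `updateHyperParameter[T](optimMethod: SGD[T])`, reading the metric named by `monitor` from the optim method's state and writing the result to `currentRate`.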

Deprecated Value Members

  1. def updateHyperParameter(config: Table, state: Table): Unit
     Definition Classes: LearningRateSchedule
     Annotations: @deprecated
     Deprecated: (Since version 0.2.0) Please input SGD instead of Table
