com.intel.analytics.bigdl.optim

ParallelAdam

class ParallelAdam[T] extends OptimMethod[T]

A multi-threaded implementation of Adam (http://arxiv.org/pdf/1412.6980.pdf)

T: the numeric type of the parameters (the implicit TensorNumeric[T] evidence, e.g. Float or Double)

Linear Supertypes
OptimMethod[T], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new ParallelAdam(learningRate: Double = 0.001, learningRateDecay: Double = 0.0, beta1: Double = 0.9, beta2: Double = 0.999, Epsilon: Double = 1.0E-8, parallelNum: Int = ...)(implicit arg0: ClassTag[T], ev: TensorNumeric[T])

    learningRate: learning rate (default 0.001)

    learningRateDecay: learning rate decay (default 0.0)

    beta1: first moment coefficient (default 0.9)

    beta2: second moment coefficient (default 0.999)

    Epsilon: for numerical stability (default 1e-8)

    parallelNum: parallelism number; defaults to the core number
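The constructor arguments above map directly onto the Adam update rule. As a minimal, self-contained sketch (plain Scala doubles standing in for BigDL tensors; `AdamStepSketch` and `adamStep` are illustrative names, not part of the API):

```scala
object AdamStepSketch {
  // One Adam step for a single scalar parameter, using the constructor's
  // default hyper-parameters. m and v are the running first/second moment
  // estimates; t is the (1-based) timestep.
  def adamStep(x: Double, grad: Double, m: Double, v: Double, t: Int,
               learningRate: Double = 0.001, beta1: Double = 0.9,
               beta2: Double = 0.999, epsilon: Double = 1e-8)
      : (Double, Double, Double) = {
    val mNew = beta1 * m + (1 - beta1) * grad          // first moment estimate
    val vNew = beta2 * v + (1 - beta2) * grad * grad   // second moment estimate
    val mHat = mNew / (1 - math.pow(beta1, t))         // bias correction
    val vHat = vNew / (1 - math.pow(beta2, t))
    val xNew = x - learningRate * mHat / (math.sqrt(vHat) + epsilon)
    (xNew, mNew, vNew)
  }
}
```

A few iterations on f(x) = x² (gradient 2x) move x toward the minimum at 0.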

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. var Epsilon: Double

    for numerical stability

  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. var beta1: Double

    first moment coefficient

  9. var beta2: Double

    second moment coefficient

  10. def clearHistory(): Unit

    Clear the history information in the OptimMethod state.

    Definition Classes
    ParallelAdam → OptimMethod
  11. def clone(): OptimMethod[T]

    Clone this OptimMethod.

    Definition Classes
    OptimMethod → AnyRef
  12. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  13. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  14. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  15. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  16. def getHyperParameter(): String

    Get hyper parameter from the config table.

    Definition Classes
    OptimMethod
  17. def getLearningRate(): Double

    Get the learning rate.

    Definition Classes
    ParallelAdam → OptimMethod
  18. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  19. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  20. var learningRate: Double

    learning rate

  21. var learningRateDecay: Double

    learning rate decay

  22. def loadFromTable(config: Table): ParallelAdam.this.type

    Load optimMethod parameters from a Table.

    config: a Table of parameter values

    Definition Classes
    ParallelAdam → OptimMethod
  23. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  24. final def notify(): Unit

    Definition Classes
    AnyRef
  25. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  26. def optimize(feval: (Tensor[T]) ⇒ (T, Tensor[T]), parameter: Tensor[T]): (Tensor[T], Array[T])

    An implementation of Adam (http://arxiv.org/pdf/1412.6980.pdf).

    feval: a function that takes a single input (X), the point of evaluation, and returns f(X) and df/dX

    parameter: the initial point

    returns: the new x vector and the function list {fx}, evaluated before the update

    Definition Classes
    ParallelAdam → OptimMethod
  27. var parallelNum: Int

    parallelism number, default is core number.

  28. def save(path: String, overWrite: Boolean = false): ParallelAdam.this.type

    Save the OptimMethod to the given path.

    path: the path to save to

    overWrite: whether to overwrite an existing file

    Definition Classes
    OptimMethod
  29. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  30. def toString(): String

    Definition Classes
    AnyRef → Any
  31. def updateHyperParameter(): Unit

    Update the hyper parameters. The hyper parameters are already updated in method optimize(), but in DistriOptimizer, optimize() is only called on the executor side, leaving the driver's hyper parameters unchanged. This method is used to update the hyper parameters on the driver side.

    Definition Classes
    OptimMethod
  32. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  34. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
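To illustrate what the "multi-thread" in ParallelAdam refers to, here is a hedged, self-contained sketch of a single optimize() call: feval is evaluated once, then the flattened parameter vector is split into parallelNum contiguous chunks and each chunk applies the Adam rule on its own thread. All names (`ParallelAdamSketch`, `optimizeOnce`) are illustrative; BigDL's actual Tensor and Engine machinery is omitted.

```scala
import java.util.concurrent.Executors

object ParallelAdamSketch {
  // feval mirrors optimize()'s contract: given x, return (f(x), df/dx).
  def optimizeOnce(feval: Array[Double] => (Double, Array[Double]),
                   x: Array[Double], m: Array[Double], v: Array[Double],
                   t: Int, lr: Double = 0.001, beta1: Double = 0.9,
                   beta2: Double = 0.999, eps: Double = 1e-8,
                   parallelNum: Int = Runtime.getRuntime.availableProcessors)
      : Double = {
    val (fx, g) = feval(x)                        // loss/gradient before update
    val pool = Executors.newFixedThreadPool(parallelNum)
    val chunk = (x.length + parallelNum - 1) / parallelNum
    val tasks = (0 until parallelNum).map { p =>
      pool.submit(new Runnable {
        def run(): Unit = {
          var i = p * chunk
          val end = math.min(i + chunk, x.length)
          while (i < end) {                       // disjoint ranges: no locking
            m(i) = beta1 * m(i) + (1 - beta1) * g(i)
            v(i) = beta2 * v(i) + (1 - beta2) * g(i) * g(i)
            val mHat = m(i) / (1 - math.pow(beta1, t))
            val vHat = v(i) / (1 - math.pow(beta2, t))
            x(i) -= lr * mHat / (math.sqrt(vHat) + eps)
            i += 1
          }
        }
      })
    }
    tasks.foreach(_.get())                        // wait for all chunks
    pool.shutdown()
    fx                                            // loss evaluated before update
  }
}
```

Because the chunks are disjoint index ranges of the same arrays, the threads need no synchronization beyond joining at the end of the step.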

Deprecated Value Members

  1. def clearHistory(state: Table): Table

    Clear the history information in the state.

    state: the state Table

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please use clearHistory() instead

  2. def getHyperParameter(config: Table): String

    Get hyper parameter from the config table.

    config: a table that contains the hyper parameter

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please use getHyperParameter() instead

  3. def optimize(feval: (Tensor[T]) ⇒ (T, Tensor[T]), parameter: Tensor[T], config: Table, state: Table = null): (Tensor[T], Array[T])

    Optimize the model parameter.

    feval: a function that takes a single input (X), the point of evaluation, and returns f(X) and df/dX

    parameter: the initial point

    config: a table with configuration parameters for the optimizer

    state: a table describing the state of the optimizer; after each call the state is modified

    returns: the new x vector and the function list, evaluated before the update

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please initialize OptimMethod with parameters when creating it instead of importing table

  4. def updateHyperParameter(config: Table, state: Table): Unit

    Update the hyper parameters. The hyper parameters are already updated in method optimize(), but in DistriOptimizer, optimize() is only called on the executor side, leaving the driver's hyper parameters unchanged. This method is used to update the hyper parameters on the driver side.

    config: the config table

    state: the state Table

    Definition Classes
    OptimMethod
    Annotations
    @deprecated
    Deprecated

    (Since version 0.2.0) Please use updateHyperParameter() instead

Inherited from OptimMethod[T]

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
