
com.intel.analytics.bigdl.parameters

AllReduceParameter


class AllReduceParameter[T] extends Serializable

Represents parameters stored on the block manager. In distributed optimization, parameters are placed on Spark's block manager, and each worker syncs them through it; the block manager thus serves as a parameter server.

A Tensor is sliced into partitionNum chunks and each chunk is assigned to a particular node (Spark executor). Likewise, gradients for each chunk are also assigned and stored on separate nodes. In this way, gradient aggregation and parameter updates can be performed independently for each chunk on separate nodes.
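
The chunking can be pictured in a few lines of Scala. The helper below is hypothetical (the actual partitioning logic lives inside AllReduceParameter and may divide remainders differently), but it illustrates how a 1D parameter of size elements maps onto partitionNum near-equal chunks:

    // Hypothetical sketch of the slicing arithmetic described above.
    // Returns the (start, length) of chunk `pid` when a 1D parameter of
    // `size` elements is split into `partitionNum` chunks.
    def chunkRange(size: Int, partitionNum: Int, pid: Int): (Int, Int) = {
      val base = size / partitionNum      // minimum chunk length
      val extra = size % partitionNum     // first `extra` chunks get one more element
      val start = pid * base + math.min(pid, extra)
      val length = base + (if (pid < extra) 1 else 0)
      (start, length)
    }

For example, 10 elements split over 4 chunks yields the ranges (0,3), (3,3), (6,2), (8,2).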

T

Tensor element type

Annotations
@SerialVersionUID()
Linear Supertypes
Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new AllReduceParameter(id: Long, partitionNum: Int, size: Int, paramOffset: Int = 1, compress: String = "fp16")(implicit arg0: ClassTag[T], ev: TensorNumeric[T])

    id

a unique identifier distinguishing this parameter from other parameters

    partitionNum

    how many partitions will use this parameter

    size

    size of the parameter (1D vector)

    paramOffset

start index within the original parameter.
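
A construction sketch with illustrative argument values; compress is left at its "fp16" default, and the implicit ClassTag and TensorNumeric for Float are supplied by the imports:

    import com.intel.analytics.bigdl.parameters.AllReduceParameter
    import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric.NumericFloat

    // Share a one-million-element Float parameter across 4 Spark partitions.
    val parameter = new AllReduceParameter[Float](
      id = 1L,           // distinguishes this parameter from others
      partitionNum = 4,  // number of partitions that will sync through it
      size = 1000000     // length of the flattened 1D parameter vector
    )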

Value Members

  1. final def !=(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  2. final def ##(): Int

    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  4. def aggregateGradientPartition(avgNumbers: Int): Unit

    Retrieve gradients for the slice of the model that this node is responsible for from all the other nodes. A new thread is created for each separate node. The gradients are then summed and stored in decompressed form in gradientPartition.

    avgNumbers

    the number of contributions the summed gradients are averaged over
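
A minimal sketch of the aggregation step, assuming every partition has already published its gradients with putGradients and that parameter is the instance constructed above:

    // Fetch and sum the gradient chunks this node owns, averaging over the
    // number of contributions (assumed here to equal the partition count).
    parameter.aggregateGradientPartition(avgNumbers = 4)

    // The aggregated slice is now available in decompressed form.
    val gradientSlice = parameter.gradientPartition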

  5. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  6. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  7. val compress: String

  8. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  9. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  10. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  11. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  12. def getWeights(localParameter: Tensor[T]): FutureResult[Int]

    Use a fixed thread pool to launch a thread for each partition of the weights. Each thread requests a partition of the weights from the Spark block manager and copies it into localParameter.

    localParameter

    the Tensor that will hold the retrieved weights

    returns

    a FutureResult containing a Future for each thread
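
A minimal retrieval sketch, assuming the buffer length matches this parameter's size and that FutureResult.waitResult() blocks until every per-partition fetch has completed:

    import com.intel.analytics.bigdl.tensor.Tensor
    import com.intel.analytics.bigdl.tensor.TensorNumericMath.TensorNumeric.NumericFloat

    // Pull each weight partition from the block manager into a local buffer.
    val localWeights = Tensor[Float](1000000)
    parameter.getWeights(localWeights).waitResult()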

  13. lazy val gradientPartition: Tensor[T]

    Tensor to hold a slice of the global gradients.

  14. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  15. def init(parameter: Tensor[T])(implicit ev: TensorNumeric[T]): (Int, Int, Int)

    This method should be called on each RDD partition before parameter synchronization begins. An empty gradient tensor is placed in the block manager to store gradients, and a 1 / partitionNum fraction of the parameter tensor is copied to the block manager as a compressed tensor.

    parameter

    a tensor holding the initial underlying weights of this AllReduceParameter
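
An initialization sketch, run once per partition before training; dataset (an RDD) and globalWeights are assumed to exist, imports are as in the construction sketch above, and the returned triple is read here as the partition id plus the offset and length of this partition's chunk:

    dataset.mapPartitions { iter =>
      // Place this partition's compressed weight chunk and an empty gradient
      // block in the block manager before any synchronization begins.
      val (partitionId, offset, length) = parameter.init(globalWeights)
      iter
    }.count()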

  16. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  17. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  18. final def notify(): Unit

    Definition Classes
    AnyRef
  19. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  20. val paramOffset: Int

    start index within the original parameter.

  21. def putGradients(parameter: Tensor[T]): Unit

    Slice the gradients computed on this partition's data into chunks, mark each chunk to be sent to the appropriate parameter node, and put them in the block manager.

    parameter

    a Tensor containing the gradients computed for the entire model on a single partition of data
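
A publishing sketch, assuming model is a BigDL module whose getParameters() returns the flattened (weights, gradients) pair:

    // After the backward pass, publish this partition's full-model gradients;
    // putGradients slices them into chunks for the owning nodes to fetch.
    val (_, localGradients) = model.getParameters()
    parameter.putGradients(localGradients)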

  22. def sendWeightPartition(): Unit

    Put the portion of the weights that this partition is responsible for into the block manager. Weights are placed locally, then pulled when needed by other partitions.
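
An update-and-publish sketch, assuming a plain SGD step on this node's slice (the learning rate is illustrative):

    val learningRate = 0.01f

    // Apply the update to the locally owned weight slice ...
    parameter.weightPartition.add(-learningRate, parameter.gradientPartition)

    // ... then publish it so other partitions can pull it with getWeights.
    parameter.sendWeightPartition()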

  23. val size: Int

    size of the parameter (1D vector)

  24. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  25. def toString(): String

    Definition Classes
    AnyRef → Any
  26. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  27. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  28. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  29. lazy val weightPartition: Tensor[T]

    Tensor to hold a slice of the global weights.
