Represents parameters stored on the block manager. In distributed optimization, parameters are placed on Spark's block manager, and each worker syncs parameters through it; the block manager thus serves as a parameter server.
A Tensor is sliced into partitionNum chunks, and each chunk is assigned to a particular node
(Spark executor). Likewise, the gradients for each chunk are assigned to and stored on separate
nodes. In this way, gradient aggregation and parameter updates can be performed independently for
each chunk on separate nodes.
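The chunk assignment above can be sketched as plain Scala. This is an illustrative sketch only, not BigDL's actual API: `chunkRange` is a hypothetical helper showing one common way to split `size` parameter elements into `partitionNum` near-equal contiguous slices, so that each node knows the offset and length of the chunk it owns.

```scala
object ChunkSketch {
  // Hypothetical helper: (offset, length) of chunk `pid` when `size`
  // elements are split across `partitionNum` partitions as evenly as
  // possible. The first `size % partitionNum` chunks get one extra element.
  def chunkRange(size: Int, partitionNum: Int, pid: Int): (Int, Int) = {
    val base = size / partitionNum
    val extra = size % partitionNum
    val offset = pid * base + math.min(pid, extra)
    val length = base + (if (pid < extra) 1 else 0)
    (offset, length)
  }

  def main(args: Array[String]): Unit = {
    // 10 parameter elements across 3 nodes: chunks of 4, 3, and 3 elements.
    for (pid <- 0 until 3) {
      val (off, len) = chunkRange(10, 3, pid)
      println(s"chunk $pid -> offset $off, length $len")
    }
    // In the scheme described above, the node owning a chunk would sum the
    // gradient slices for that range and apply the parameter update locally
    // before publishing the new values back to the block manager.
  }
}
```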
Tensor element type