Identity simply returns the input as the output.
Helper utilities for integrating containers with MKL-DNN.
Helper utilities for integrating a Module with MKL-DNN.
Converts the output to a user-defined layout and assigns the gradOutput layout.
Creates a module that takes a table as input and outputs the element at the given index
(positive or negative). The selected element can be either a table or a Tensor.
The gradients of the non-selected elements are zeroed Tensors of the same size as the
corresponding inputs. This holds regardless of the nesting depth of the encapsulated
Tensors, since the zeroing function is applied recursively.
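The select-and-zero behaviour described above can be sketched as follows. This is an illustrative Python sketch, not the library's actual implementation; `select_forward`, `select_backward`, and `zero_like` are hypothetical names, and plain lists stand in for tables and Tensors:

```python
# Hypothetical sketch of the "select one element, zero the rest" behaviour.
# Nested lists stand in for nested tables; flat lists stand in for Tensors.

def select_forward(table, index):
    """Return the element at `index` (1-based; negative counts from the end)."""
    # Convert the 1-based index convention to Python's 0-based indexing.
    return table[index - 1] if index > 0 else table[index]

def zero_like(element):
    """Recursively build a zeroed copy, whatever the nesting depth."""
    if isinstance(element, list) and element and isinstance(element[0], list):
        return [zero_like(e) for e in element]   # nested table: recurse
    return [0.0] * len(element)                  # flat "tensor": zero it

def select_backward(table, index, grad_output):
    """Gradient of the selected element is grad_output; all others are zeroed."""
    pos = index - 1 if index > 0 else len(table) + index
    return [grad_output if i == pos else zero_like(e)
            for i, e in enumerate(table)]
```

For example, `select_backward([[1.0, 2.0], [3.0, 4.0]], 2, [0.5, 0.5])` leaves the gradient for element 2 intact and zeroes the gradient for element 1.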
Applies a 2D convolution over an input image composed of several input planes. The input tensor in forward(input) is expected to be a 3D tensor (nInputPlane x height x width).
nInputPlane: the number of expected input planes in the image given to forward()
nOutputPlane: the number of output planes the convolution layer will produce
kernelW: the kernel width of the convolution
kernelH: the kernel height of the convolution
strideW: Int = 1, the step of the convolution in the width dimension
strideH: Int = 1, the step of the convolution in the height dimension
padW: Int = 0, the additional zeros added per width to the input planes
padH: Int = 0, the additional zeros added per height to the input planes
nGroup: Int = 1, kernel group number
propagateBack: Boolean = true, whether to propagate the gradient back
wRegularizer: Regularizer[Float] = null, regularizer applied to the weights
bRegularizer: Regularizer[Float] = null, regularizer applied to the bias
initWeight: Tensor[Float] = null, initial weight
initBias: Tensor[Float] = null, initial bias
initGradWeight: Tensor[Float] = null, initial gradient weight
initGradBias: Tensor[Float] = null, initial gradient bias
withBias: Boolean = true, whether to include a bias term
format: DataFormat = DataFormat.NCHW, data layout of the input
dilationW: Int = 1, dilation in the width dimension
dilationH: Int = 1, dilation in the height dimension
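The spatial output size implied by these parameters follows the standard convolution size formula. The helper below is a hypothetical sketch for illustration, not part of the library's API:

```python
# Standard convolution output-size arithmetic (illustrative helper).
def conv_out_size(in_size, kernel, stride=1, pad=0, dilation=1):
    # Dilation spreads the kernel taps apart, enlarging its effective extent.
    effective_kernel = dilation * (kernel - 1) + 1
    return (in_size + 2 * pad - effective_kernel) // stride + 1
```

For example, a 3x3 kernel with stride 1 and padding 1 preserves the input size, and dilation 2 with padding 2 does the same.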
When padW and padH are both -1, a padding algorithm similar to TensorFlow's "SAME" padding is used. That is:
outHeight = Math.ceil(inHeight.toFloat / strideH.toFloat)
outWidth = Math.ceil(inWidth.toFloat / strideW.toFloat)
padAlongHeight = Math.max(0, (outHeight - 1) * strideH + kernelH - inHeight)
padAlongWidth = Math.max(0, (outWidth - 1) * strideW + kernelW - inWidth)
padTop = padAlongHeight / 2
padLeft = padAlongWidth / 2
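The arithmetic above can be sketched in a few lines. This is an illustrative helper, not library code; the assumption that the remainder of the padding goes to the bottom/right edge matches TensorFlow's "SAME" convention:

```python
import math

# Sketch of the "SAME" padding arithmetic (the padW = padH = -1 case).
def same_padding(in_size, kernel, stride):
    out_size = math.ceil(in_size / stride)
    pad_along = max(0, (out_size - 1) * stride + kernel - in_size)
    pad_front = pad_along // 2            # padTop / padLeft
    pad_back = pad_along - pad_front      # remainder goes to bottom / right
    return out_size, pad_front, pad_back
```

For example, an input of height 5 with a 3x3 kernel and stride 2 yields an output height of 3 with one row of zeros padded on each side.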
Identity simply returns the input as the output. It is useful inside parallel containers when a branch needs to receive the original input unchanged.
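A minimal sketch of such a pass-through module, in illustrative Python rather than the library's own code:

```python
# Identity is a no-op in both directions: forward passes the input through
# untouched, and backward passes the incoming gradient through untouched.
class Identity:
    def forward(self, x):
        return x

    def backward(self, x, grad_output):
        return grad_output
```

Placed on one branch of a parallel container, it lets downstream modules see the original input alongside the transformed branches.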