package com.intel.analytics.bigdl.nn.mkldnn

Type Members

  1. class AvgPooling extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
  2. class CAddTable extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with MklInt8Convertible
  3. class ConcatTable extends DynamicContainer[Activity, Activity, Float] with MklDnnContainer with MklInt8Convertible
  4. class DnnGraph extends Graph[Float] with MklDnnLayer with MklInt8Convertible
  5. class Dropout extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
  6. case class HeapData(_shape: Array[Int], _layout: Int, _dataType: Int = DataType.F32) extends MemoryData with Product with Serializable
  7. class Identity extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
    Identity simply returns the input as its output. It is useful in a parallel container to obtain the original input (see the sketch below).
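
    A minimal pass-through sketch (assumed API: the no-argument ConcatTable()/Identity()/ReLU() factories below follow the usual BigDL apply() convention; in practice mkldnn modules run inside a compiled container, which initializes their primitives before forward is called):

      import com.intel.analytics.bigdl.nn.mkldnn.{ConcatTable, Identity, ReLU}

      // Two branches over the same input: one transformed, one untouched.
      val branches = ConcatTable()
      branches.add(ReLU())      // transformed view of the input
      branches.add(Identity())  // the original input, passed through unchanged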

  8. class Input extends ReorderMemory
  9. class JoinTable extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
  10. class LRN extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
  11. class Linear extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with Initializable with MklInt8Convertible
  12. class MaxPooling extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
  13. sealed trait MemoryData extends Serializable
  14. trait MklDnnContainer extends DynamicContainer[Activity, Activity, Float] with MklDnnModule

    Helper utilities for integrating containers with MKL-DNN. See the usage sketch below.
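
    A minimal sketch of the container workflow, assuming the Sequential/Input/compile API shown in the BigDL MKL-DNN examples (exact signatures and the Memory.Format constants may differ across versions):

      import com.intel.analytics.bigdl.mkl.Memory
      import com.intel.analytics.bigdl.nn.mkldnn._
      import com.intel.analytics.bigdl.nn.mkldnn.Phase.InferencePhase
      import com.intel.analytics.bigdl.tensor.Tensor

      // Build the container, declare the input memory layout, then
      // compile the MKL-DNN primitives for the chosen phase.
      val model = Sequential()
      model.add(Input(Array(4, 3, 28, 28), Memory.Format.nchw))
      model.add(SpatialConvolution(3, 8, 5, 5))
      model.add(ReLU())
      model.add(MaxPooling(2, 2, 2, 2))
      model.compile(InferencePhase)

      val out = model.forward(Tensor[Float](4, 3, 28, 28).rand())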

  15. trait MklDnnLayer extends AbstractModule[Activity, Activity, Float] with MklDnnModule
  16. trait MklDnnModule extends MklDnnModuleHelper

    Helper utilities for integrating a Module with MKL-DNN

  17. trait MklDnnModuleHelper extends MemoryOwner
  18. abstract class MklDnnNativeMemory extends Releasable
  19. class MklDnnRuntime extends AnyRef
  20. class MklMemoryAttr extends MklDnnNativeMemory
  21. class MklMemoryDescInit extends MklDnnNativeMemory
  22. class MklMemoryPostOps extends MklDnnNativeMemory
  23. class MklMemoryPrimitive extends MklDnnNativeMemory
  24. class MklMemoryPrimitiveDesc extends MklDnnNativeMemory
  25. case class NativeData(_shape: Array[Int], _layout: Int, _dataType: Int = DataType.F32) extends MemoryData with Product with Serializable
  26. class Output extends AbstractModule[Activity, Activity, Float] with MklDnnLayer

    Converts the output to a user-defined layout and specifies the gradOutput layout

  27. sealed class Phase extends AnyRef
  28. class RNN extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with Initializable
  29. class ReLU extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with MklInt8Convertible
  30. class ReorderMemory extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with Releasable
  31. case class ResNet50PerfParams(batchSize: Int = 16, iteration: Int = 50, training: Boolean = true, model: String = "vgg16") extends Product with Serializable
  32. class SelectTable extends AbstractModule[Activity, Activity, Float] with MklDnnLayer

    Creates a module that takes a table as input and outputs the element at the given index (positive or negative). The selected element can be either a table or a Tensor. The gradients of the non-selected elements are zeroed Tensors of the same size; this holds regardless of the depth of the encapsulated Tensor, since the zeroing function used internally is recursive. A usage sketch follows below.

    Annotations
    @SerialVersionUID()
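
    A minimal sketch, assuming the mkldnn SelectTable mirrors the standard BigDL nn.SelectTable apply(index) API (shown standalone for illustration; inside a compiled container the setup is handled for you):

      import com.intel.analytics.bigdl.nn.mkldnn.SelectTable
      import com.intel.analytics.bigdl.tensor.Tensor
      import com.intel.analytics.bigdl.utils.T

      val first  = Tensor[Float](2, 2).rand()
      val second = Tensor[Float](2, 2).rand()

      // Index 2 selects the second element of the input table;
      // a negative index would count from the end.
      val select = SelectTable(2)
      val out = select.forward(T(first, second)) // out is `second`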
  33. class Sequential extends DynamicContainer[Activity, Activity, Float] with MklDnnContainer with MklInt8Convertible
  34. class SoftMax extends AbstractModule[Activity, Activity, Float] with MklDnnLayer
  35. class SpatialBatchNormalization extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with Initializable with MklInt8Convertible
  36. class SpatialConvolution extends AbstractModule[Activity, Activity, Float] with MklDnnLayer with Initializable with Serializable with MklInt8Convertible

    Applies a 2D convolution over an input image composed of several input planes. The input tensor in forward(input) is expected to be a 3D tensor (nInputPlane x height x width).

    Parameters:
      nInputPlane: the number of expected input planes in the image given to forward()
      nOutputPlane: the number of output planes the convolution layer will produce
      kernelW: the kernel width of the convolution
      kernelH: the kernel height of the convolution
      strideW: Int = 1, the step of the convolution in the width dimension
      strideH: Int = 1, the step of the convolution in the height dimension
      padW: Int = 0, the additional zeros added per width to the input planes
      padH: Int = 0, the additional zeros added per height to the input planes
      nGroup: Int = 1, the kernel group number
      propagateBack: Boolean = true, whether to propagate gradient back
      wRegularizer: Regularizer[Float] = null
      bRegularizer: Regularizer[Float] = null
      initWeight: Tensor[Float] = null
      initBias: Tensor[Float] = null
      initGradWeight: Tensor[Float] = null
      initGradBias: Tensor[Float] = null
      withBias: Boolean = true
      format: DataFormat = DataFormat.NCHW
      dilationW: Int = 1
      dilationH: Int = 1

    When padW and padH are both -1, a padding algorithm similar to TensorFlow's "SAME" padding is used. That is:

      outHeight = Math.ceil(inHeight.toFloat / strideH.toFloat)
      outWidth  = Math.ceil(inWidth.toFloat / strideW.toFloat)

      padAlongHeight = Math.max(0, (outHeight - 1) * strideH + kernelH - inHeight)
      padAlongWidth  = Math.max(0, (outWidth - 1) * strideW + kernelW - inWidth)

      padTop  = padAlongHeight / 2
      padLeft = padAlongWidth / 2
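
    A quick worked check of that arithmetic in plain Scala (no BigDL APIs involved; the value names simply mirror the formulas above):

      val (inHeight, inWidth) = (224, 224)
      val (kernelH, kernelW)  = (3, 3)
      val (strideH, strideW)  = (2, 2)

      val outHeight = math.ceil(inHeight.toFloat / strideH).toInt // 112
      val outWidth  = math.ceil(inWidth.toFloat / strideW).toInt  // 112

      // Total padding needed so the strided kernel covers the whole input.
      val padAlongHeight = math.max(0, (outHeight - 1) * strideH + kernelH - inHeight) // 1
      val padAlongWidth  = math.max(0, (outWidth - 1) * strideW + kernelW - inWidth)   // 1

      val padTop  = padAlongHeight / 2 // 0; any odd leftover row lands at the bottom
      val padLeft = padAlongWidth / 2  // 0; any odd leftover column lands on the right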

Value Members

  1. object AvgPooling extends Serializable
  2. object CAddTable extends Serializable
  3. object ConcatTable extends Serializable
  4. object Convolution
  5. object DnnGraph extends Serializable
  6. object Dropout extends Serializable
  7. object Identity extends Serializable
  8. object Input extends Serializable
  9. object JoinTable extends Serializable
  10. object LRN extends Serializable
  11. object Linear extends Serializable
  12. object MaxPooling extends Serializable
  13. object MklDnnMemory
  14. object Output extends Serializable
  15. object Perf
  16. object Phase
  17. object RNN extends Serializable
  18. object ReLU extends Serializable
  19. object ReorderMemory extends Serializable
  20. object ResNet
  21. object SbnDnn
  22. object Scale
  23. object SelectTable extends Serializable
  24. object Sequential extends Serializable
  25. object SoftMax extends Serializable
  26. object SpatialBatchNormalization extends Serializable
  27. object SpatialConvolution extends Serializable
  28. package models
