bigdl.nn.onnx package
Submodules
bigdl.nn.onnx.layer module
class bigdl.nn.onnx.layer.Constant(value, bigdl_type='float')
    Bases: bigdl.nn.layer.Layer

    >>> value = np.random.random((3, 3))
    >>> constant = Constant(value)
    creating: createConstant
class bigdl.nn.onnx.layer.Gather(bigdl_type='float')
    Bases: bigdl.nn.layer.Layer

    >>> gather = Gather()
    creating: createGather
class bigdl.nn.onnx.layer.Gemm(matrix_b, matrix_c, alpha=1.0, beta=1.0, trans_a=0, trans_b=0, bigdl_type='float')
    Bases: bigdl.nn.layer.Layer

    General matrix multiplication:
    https://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms#Level_3

    A' = transpose(A) if trans_a else A
    B' = transpose(B) if trans_b else B

    Computes Y = alpha * A' * B' + beta * C, where input tensor A has shape (M, K)
    or (K, M), input tensor B has shape (K, N) or (N, K), input tensor C is
    broadcastable to shape (M, N), and output tensor Y has shape (M, N). A is
    transposed before the computation if trans_a is non-zero, and likewise B for
    trans_b.

    >>> matrix_b = np.random.random([2, 2])
    >>> matrix_c = np.random.random([2, 2])
    >>> gemm = Gemm(matrix_b=matrix_b, matrix_c=matrix_c)
    creating: createGemm
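    For reference, the computation above can be written out directly in numpy.
    This is a minimal sketch of the formula only, not the BigDL API itself; the
    shapes (M=2, K=3, N=2) and array names are illustrative assumptions.

        import numpy as np

        alpha, beta = 1.0, 1.0
        trans_a, trans_b = 0, 0

        A = np.random.random((2, 3))   # (M, K), or (K, M) if trans_a is non-zero
        B = np.random.random((3, 2))   # (K, N), or (N, K) if trans_b is non-zero
        C = np.random.random((2, 2))   # broadcastable to (M, N)

        A_prime = A.T if trans_a else A
        B_prime = B.T if trans_b else B

        # Y = alpha * A' * B' + beta * C, with shape (M, N)
        Y = alpha * (A_prime @ B_prime) + beta * C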
class bigdl.nn.onnx.layer.Reshape(shape=None, bigdl_type='float')
    Bases: bigdl.nn.layer.Layer

    A layer which reshapes the input tensor to the given target shape.

    >>> shape = (2, 2)
    >>> reshape = Reshape(shape)
    creating: createReshape
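    For reference, a minimal numpy sketch of the expected behaviour; it shows
    the intended output rather than the BigDL call itself, and the example
    input is an illustrative assumption.

        import numpy as np

        x = np.arange(4, dtype='float32').reshape(1, 4)   # input tensor of shape (1, 4)
        target = (2, 2)
        y = x.reshape(target)   # what Reshape(target) is expected to produce: shape (2, 2)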
class bigdl.nn.onnx.layer.Shape(bigdl_type='float')
    Bases: bigdl.nn.layer.Layer

    A layer which takes a tensor as input and outputs a 1D tensor containing the shape of the input.

    >>> shape = Shape()
    creating: createShape
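    For reference, a minimal numpy sketch of the semantics described above;
    this illustrates the expected output only, not the BigDL call itself, and
    the example input is an illustrative assumption.

        import numpy as np

        x = np.random.random((2, 3, 4))   # any input tensor
        shape_out = np.array(x.shape)     # 1D tensor [2, 3, 4], what Shape is expected to output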