Simple activation function to be applied to the output.
Simple activation function to be applied to the output. Available activations: 'tanh', 'relu', 'sigmoid', 'softmax', 'softplus', 'softsign', 'hard_sigmoid'.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
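For illustration, a minimal sketch of using it as the first layer (the exact constructor signature is an assumption, modeled on the Scala examples elsewhere in this document):
Activation("relu", inputShape = Shape(10))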
Applies an atrous convolution operator for filtering neighborhoods of 1-D inputs.
Applies an atrous convolution operator for filtering neighborhoods of 1-D inputs. A.k.a. dilated convolution or convolution with holes. Bias will be included in this layer. Border mode currently supported for this layer is 'valid'. You can also use AtrousConv1D as an alias of this layer. The input of this layer should be 3D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies an atrous convolution operator for filtering windows of 2-D inputs.
Applies an atrous convolution operator for filtering windows of 2-D inputs. A.k.a. dilated convolution or convolution with holes. Bias will be included in this layer. Data format currently supported for this layer is DataFormat.NCHW (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. You can also use AtrousConv2D as an alias of this layer. The input of this layer should be 4D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape=Shape(3, 128, 128) for 128x128 RGB pictures.
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies average pooling operation for temporal data.
Applies average pooling operation for temporal data. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies average pooling operation for spatial data.
Applies average pooling operation for spatial data. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies average pooling operation for 3D data (spatial or spatio-temporal).
Applies average pooling operation for 3D data (spatial or spatio-temporal). Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Batch normalization layer.
Batch normalization layer. Normalizes the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1. It is a feature-wise normalization; each feature map in the input will be normalized separately. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
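For illustration, a hedged sketch of this layer as the first layer of a model, with default normalization arguments assumed:
BatchNormalization(inputShape = Shape(3, 12, 12))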
Bidirectional wrapper for RNNs.
Bidirectional wrapper for RNNs. Bidirectional currently requires RNNs to return the full sequence, i.e. returnSequences = true.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Example of creating a bidirectional LSTM: Bidirectional(LSTM(12, returnSequences = true), mergeMode = "sum", inputShape = Shape(32, 32))
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Convolutional LSTM.
Convolutional LSTM. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'same'. The convolution kernel for this layer is a square kernel with equal strides 'subsample'. The input of this layer should be 5D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies convolution operator for filtering neighborhoods of 1-D inputs.
Applies convolution operator for filtering neighborhoods of 1-D inputs. You can also use Conv1D as an alias of this layer. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies a 2D convolution over an input image composed of several input planes.
Applies a 2D convolution over an input image composed of several input planes. You can also use Conv2D as an alias of this layer. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape=Shape(3, 128, 128) for 128x128 RGB pictures.
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
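For illustration, a hypothetical sketch with 32 filters and a 3x3 kernel (the positional nbFilter, nbRow, nbCol parameters and the activation keyword are assumptions):
Convolution2D(32, 3, 3, activation = "relu", inputShape = Shape(3, 128, 128))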
Applies convolution operator for filtering windows of three-dimensional inputs.
Applies convolution operator for filtering windows of three-dimensional inputs. You can also use Conv3D as an alias of this layer. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape=Shape(3, 10, 128, 128) for 10 frames of 128x128 RGB pictures.
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Cropping layer for 1D input (e.g. temporal sequence).
Cropping layer for 1D input (e.g. temporal sequence). The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Cropping layer for 2D input (e.g. picture).
Cropping layer for 2D input (e.g. picture). The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Cropping layer for 3D data (e.g. spatial or spatio-temporal). The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Transposed convolution operator for filtering windows of 2-D inputs.
Transposed convolution operator for filtering windows of 2-D inputs. The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. Data format currently supported for this layer is DataFormat.NCHW (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. You can also use Deconv2D as an alias of this layer. The input of this layer should be 4D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension). e.g. inputShape=Shape(3, 128, 128) for 128x128 RGB pictures.
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
A densely-connected NN layer.
A densely-connected NN layer. The most common input is 2D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
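For illustration, a hedged sketch of a Dense layer with 10 output units on a 20-dimensional input (output dimension passed positionally; the activation keyword is an assumption):
Dense(10, activation = "softmax", inputShape = Shape(20))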
Applies Dropout to the input by randomly setting a fraction 'p' of input units to 0 at each update during training time in order to prevent overfitting.
Applies Dropout to the input by randomly setting a fraction 'p' of input units to 0 at each update during training time in order to prevent overfitting.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
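For illustration, a minimal sketch that drops 25% of the input units (the fraction p is assumed to be the first positional argument):
Dropout(0.25, inputShape = Shape(10))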
Exponential Linear Unit.
Exponential Linear Unit. It follows: f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Turn positive integers (indexes) into dense vectors of fixed size.
Turn positive integers (indexes) into dense vectors of fixed size. The input of this layer should be 2D.
This layer can only be used as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
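For illustration, a hypothetical sketch mapping a vocabulary of 1000 indices to 64-dimensional vectors for sequences of length 10 (positional inputDim and outputDim are assumptions):
Embedding(1000, 64, inputShape = Shape(10))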
Flattens the input without affecting the batch size.
Flattens the input without affecting the batch size.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
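For illustration, a minimal sketch that flattens a 3 x 8 x 8 input into a 192-dimensional vector:
Flatten(inputShape = Shape(3, 8, 8))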
Gated Recurrent Unit architecture.
Gated Recurrent Unit architecture. The input of this layer should be 3D, i.e. (batch, time steps, input dim).
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
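For illustration, a hedged sketch with output dimension 16 over 10 time steps of 32 features (positional outputDim is an assumption):
GRU(16, inputShape = Shape(10, 32))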
Apply multiplicative 1-centered Gaussian noise.
Apply multiplicative 1-centered Gaussian noise. As it is a regularization layer, it is only active at training time.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Apply additive zero-centered Gaussian noise.
Apply additive zero-centered Gaussian noise. This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian noise is a natural choice as a corruption process for real-valued inputs. As it is a regularization layer, it is only active at training time.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies global average pooling operation for temporal data.
Applies global average pooling operation for temporal data. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies global average pooling operation for spatial data.
Applies global average pooling operation for spatial data. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies global average pooling operation for 3D data.
Applies global average pooling operation for 3D data. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies global max pooling operation for temporal data.
Applies global max pooling operation for temporal data. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies global max pooling operation for spatial data.
Applies global max pooling operation for spatial data. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies global max pooling operation for 3D data.
Applies global max pooling operation for 3D data. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Abstract class for different global pooling 1D layers.
Abstract class for different global pooling 1D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'GlobalAveragePooling1D' and 'GlobalMaxPooling1D' instead.
Abstract class for different global pooling 2D layers.
Abstract class for different global pooling 2D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'GlobalAveragePooling2D' and 'GlobalMaxPooling2D' instead.
Abstract class for different global pooling 3D layers.
Abstract class for different global pooling 3D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'GlobalAveragePooling3D' and 'GlobalMaxPooling3D' instead.
Densely connected highway network.
Densely connected highway network. Highway layers are a natural extension of LSTMs to feedforward networks. The input of this layer should be 2D, i.e. (batch, input dim).
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Wraps a Torch-style layer into a Keras-style layer.
Wraps a Torch-style layer into a Keras-style layer. This layer can be built multiple times. It is assumed that the input shape and the output shape stay the same in this layer.
A Keras-compatible layer.
KerasModule is the basic component of all Keras-like layers.
KerasModule is the basic component of all Keras-like layers. It forwards activations and backpropagates gradients, and can be mixed with other AbstractModules.
Input data type
Output data type
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Wraps a Torch-style layer into a Keras-style layer.
Wraps a Torch-style layer into a Keras-style layer. This layer can be built multiple times.
A Keras-compatible layer.
Long Short Term Memory unit architecture.
Long Short Term Memory unit architecture. The input of this layer should be 3D, i.e. (batch, time steps, input dim).
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
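For illustration, a sketch mirroring the Bidirectional example above, with output dimension 32 over 10 time steps of 16 features:
LSTM(32, returnSequences = true, inputShape = Shape(10, 16))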
Leaky version of a Rectified Linear Unit.
Leaky version of a Rectified Linear Unit. It allows a small gradient when the unit is not active: f(x) = alpha * x for x < 0, f(x) = x for x >= 0.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Locally-connected layer for 1D inputs which works similarly to the TemporalConvolution layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
Locally-connected layer for 1D inputs which works similarly to the TemporalConvolution layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input. Border mode currently supported for this layer is 'valid'. The input of this layer should be 3D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Locally-connected layer for 2D inputs that works similarly to the SpatialConvolution layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input.
Locally-connected layer for 2D inputs that works similarly to the SpatialConvolution layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input. The input of this layer should be 4D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Use a mask value to skip timesteps for a sequence.
Use a mask value to skip timesteps for a sequence.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies max pooling operation for temporal data.
Applies max pooling operation for temporal data. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies max pooling operation for spatial data.
Applies max pooling operation for spatial data. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
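For illustration, a hypothetical sketch with a 2x2 pooling window (the poolSize tuple keyword is an assumption):
MaxPooling2D(poolSize = (2, 2), inputShape = Shape(3, 24, 24))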
Applies max pooling operation for 3D data (spatial or spatio-temporal).
Applies max pooling operation for 3D data (spatial or spatio-temporal). Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
A dense maxout layer that takes the element-wise maximum of linear layers.
A dense maxout layer that takes the element-wise maximum of linear layers. This allows the layer to learn a convex, piecewise linear activation function over the inputs. The input of this layer should be 2D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Used to merge a list of inputs into a single output, following some merge mode.
Used to merge a list of inputs into a single output, following some merge mode. To merge layers, it must take at least two input layers.
When using this layer as the first layer in a model, you need to provide the argument inputShape for input layers (each as a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
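For illustration, a hedged sketch that sums two previously defined branches, branch1 and branch2 (hypothetical names; the layers and mode keywords are assumptions):
Merge(layers = List(branch1, branch2), mode = "sum")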
Permutes the dimensions of the input according to a given pattern.
Permutes the dimensions of the input according to a given pattern. Useful for connecting RNNs and convnets together.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Abstract class for different pooling 1D layers.
Abstract class for different pooling 1D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'AveragePooling1D' and 'MaxPooling1D' instead.
Abstract class for different pooling 2D layers.
Abstract class for different pooling 2D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'AveragePooling2D' and 'MaxPooling2D' instead.
Abstract class for different pooling 3D layers.
Abstract class for different pooling 3D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'AveragePooling3D' and 'MaxPooling3D' instead.
This is the abstract base class for recurrent layers.
This is the abstract base class for recurrent layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'SimpleRNN', 'LSTM' and 'GRU' instead.
Repeats the input n times.
Repeats the input n times. The input of this layer should be 2D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Reshapes an output to a certain shape.
Reshapes an output to a certain shape. Supports shape inference by allowing one -1 in the target shape. For example, if inputShape = Shape(2, 3, 4), targetShape = Array(3, -1), then outputShape will be Shape(3, 8).
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
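For illustration, a sketch of the shape-inference example above (targetShape assumed to be the first positional argument):
Reshape(Array(3, -1), inputShape = Shape(2, 3, 4))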
S-shaped Rectified Linear Unit.
S-shaped Rectified Linear Unit. It follows: f(x) = tr + ar * (x - tr) for x >= tr, f(x) = x for tl < x < tr, f(x) = tl + al * (x - tl) for x <= tl.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Applies separable convolution operator for 2D inputs.
Applies separable convolution operator for 2D inputs. Separable convolutions consist in first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. The depthMultiplier argument controls how many output channels are generated per input channel in the depthwise step. You can also use SeparableConv2D as an alias of this layer. The input of this layer should be 4D.
When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension). e.g. inputShape=Shape(3, 128, 128) for 128x128 RGB pictures.
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
A fully-connected recurrent neural network cell.
A fully-connected recurrent neural network cell. The output is to be fed back to input. The input of this layer should be 3D, i.e. (batch, time steps, input dim).
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Just a wrapper class.
Just a wrapper class. Please use Activation('softmax') instead.
Spatial 1D version of Dropout.
Spatial 1D version of Dropout. This version performs the same function as Dropout, however it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Spatial 2D version of Dropout.
Spatial 2D version of Dropout. This version performs the same function as Dropout, however it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Spatial 3D version of Dropout.
Spatial 3D version of Dropout. This version performs the same function as Dropout, however it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead. The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Thresholded Rectified Linear Unit.
Thresholded Rectified Linear Unit. It follows: f(x) = x for x > theta, f(x) = 0 otherwise.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
Numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
TimeDistributed wrapper.
TimeDistributed wrapper. Apply a layer to every temporal slice of an input. The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension. When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
If you apply TimeDistributed to a Dense layer, you can use: TimeDistributed(Dense(8), inputShape = Shape(10, 12))
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
UpSampling layer for 1D inputs.
UpSampling layer for 1D inputs. Repeats each temporal step 'length' times along the time axis. The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
UpSampling layer for 2D inputs.
UpSampling layer for 2D inputs. Repeats the rows and columns of the data by size(0) and size(1) respectively. The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
UpSampling layer for 3D inputs.
UpSampling layer for 3D inputs. Repeats the 1st, 2nd and 3rd dimensions of the data by size(0), size(1) and size(2) respectively. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Zero-padding layer for 1D input (e.g. temporal sequence).
Zero-padding layer for 1D input (e.g. temporal sequence). The input of this layer should be 3D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Zero-padding layer for 2D input (e.g. picture).
Zero-padding layer for 2D input (e.g. picture). The input of this layer should be 4D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.
Zero-padding layer for 3D data (spatial or spatio-temporal).
Zero-padding layer for 3D data (spatial or spatio-temporal). The input of this layer should be 5D.
When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
The numeric type of parameters (e.g. weight, bias). Only float/double are supported for now.