Simple activation function to be applied to the output.
Applies an atrous convolution operator for filtering neighborhoods of 1-D inputs.
Applies an atrous convolution operator for filtering windows of 2-D inputs.
Applies average pooling operation for temporal data.
Applies average pooling operation for spatial data.
Applies average pooling operation for 3D data (spatial or spatio-temporal).
Batch normalization layer.
Bidirectional wrapper for RNNs.
Convolutional LSTM.
Applies a convolution operator for filtering neighborhoods of 1-D inputs.
Applies a 2D convolution over an input image composed of several input planes.
Applies a convolution operator for filtering windows of 3-D inputs.
Cropping layer for 1D input (e.g. temporal sequence).
Cropping layer for 2D input (e.g. picture).
Cropping layer for 3D data (e.g. spatial or spatio-temporal).
Transposed convolution operator for filtering windows of 2-D inputs.
A densely-connected NN layer.
Applies Dropout to the input: randomly sets a fraction 'p' of input units to 0 at each update during training time, which helps prevent overfitting.
Exponential Linear Unit.
Turns positive integers (indices) into dense vectors of fixed size.
Flattens the input without affecting the batch size.
Gated Recurrent Unit architecture.
Applies multiplicative 1-centered Gaussian noise.
Applies additive zero-centered Gaussian noise.
Applies global average pooling operation for temporal data.
Applies global average pooling operation for spatial data.
Applies global average pooling operation for 3D data.
Applies global max pooling operation for temporal data.
Applies global max pooling operation for spatial data.
Applies global max pooling operation for 3D data.
Abstract class for different global pooling 1D layers.
Abstract class for different global pooling 2D layers.
Abstract class for different global pooling 3D layers.
Densely connected highway network.
Wraps a Torch-style layer as a Keras-style layer.
KerasModule is the base component of all Keras-like layers.
Wraps a Torch-style layer as a Keras-style layer.
Long Short-Term Memory unit architecture.
Leaky version of a Rectified Linear Unit.
Locally-connected layer for 1D inputs. It works similarly to the TemporalConvolution layer, except that weights are unshared: a different set of filters is applied at each different patch of the input.
Locally-connected layer for 2D inputs. It works similarly to the SpatialConvolution layer, except that weights are unshared: a different set of filters is applied at each different patch of the input.
Uses a mask value to skip timesteps in a sequence.
Applies max pooling operation for temporal data.
Applies max pooling operation for spatial data.
Applies max pooling operation for 3D data (spatial or spatio-temporal).
A dense maxout layer that takes the element-wise maximum of linear layers.
Merges a list of inputs into a single output, following a given merge mode.
Permutes the dimensions of the input according to a given pattern.
Abstract class for different pooling 1D layers.
Abstract class for different pooling 2D layers.
Abstract class for different pooling 3D layers.
Abstract base class for recurrent layers.
Repeats the input n times.
Reshapes an output to a certain shape.
S-shaped Rectified Linear Unit.
Applies a separable convolution operator for 2D inputs.
A fully-connected recurrent neural network cell.
Just a wrapper class.
Spatial 1D version of Dropout.
Spatial 2D version of Dropout.
Spatial 3D version of Dropout.
Thresholded Rectified Linear Unit.
TimeDistributed wrapper.
UpSampling layer for 1D inputs.
UpSampling layer for 2D inputs.
UpSampling layer for 3D inputs.
Zero-padding layer for 1D input (e.g. temporal sequence).
Zero-padding layer for 2D input (e.g. picture).
Zero-padding layer for 3D data (spatial or spatio-temporal).
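The layers above follow the Keras 1.2.2-style Sequential API. A minimal usage sketch stacking a few of them is given below; the bigdl.nn.keras import paths are an assumption based on BigDL's Keras-like API and may differ across versions:

    from bigdl.nn.keras.topology import Sequential
    from bigdl.nn.keras.layer import Dense, Dropout, Activation

    # Stack a few of the layers listed above into a small classifier.
    model = Sequential()
    model.add(Dense(64, input_shape=(784,)))    # densely-connected layer
    model.add(Activation("relu"))               # activation applied to the output
    model.add(Dropout(0.5))                     # zero a fraction p=0.5 of units during training
    model.add(Dense(10, activation="softmax"))  # output layer with softmax activation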