Package com.intel.analytics.bigdl.nn.keras

package keras

Linear Supertypes
AnyRef, Any

Type Members

  1. class Activation[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Simple activation function to be applied to the output. Available activations: 'tanh', 'relu', 'sigmoid', 'softmax', 'softplus', 'softsign', 'hard_sigmoid'.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
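
    For example, a minimal usage sketch (assuming a Float numeric type and the Sequential container from this package; optional arguments are left at their defaults):

    Sequential[Float]().add(Activation("relu", inputShape = Shape(10)))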

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  2. class AtrousConvolution1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Applies an atrous convolution operator for filtering neighborhoods of 1-D inputs. A.k.a dilated convolution or convolution with holes. Bias will be included in this layer. Border mode currently supported for this layer is 'valid'. You can also use AtrousConv1D as an alias of this layer. The input of this layer should be 3D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  3. class AtrousConvolution2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Applies an atrous convolution operator for filtering windows of 2-D inputs. A.k.a dilated convolution or convolution with holes. Bias will be included in this layer. Data format currently supported for this layer is DataFormat.NCHW (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. You can also use AtrousConv2D as an alias of this layer. The input of this layer should be 4D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape = Shape(3, 128, 128) for 128x128 RGB pictures.

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  4. class AveragePooling1D[T] extends Pooling1D[T]

    Applies average pooling operation for temporal data. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  5. class AveragePooling2D[T] extends Pooling2D[T]

    Applies average pooling operation for spatial data. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  6. class AveragePooling3D[T] extends Pooling3D[T]

    Applies average pooling operation for 3D data (spatial or spatio-temporal). Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  7. class BatchNormalization[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Batch normalization layer. Normalize the activations of the previous layer at each batch, i.e. applies a transformation that maintains the mean activation close to 0 and the activation standard deviation close to 1. It is a feature-wise normalization, each feature map in the input will be normalized separately. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  8. class Bidirectional[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Bidirectional wrapper for RNNs. Bidirectional currently requires RNNs to return the full sequence, i.e. returnSequences = true.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    Example of creating a bidirectional LSTM: Bidirectional(LSTM(12, returnSequences = true), mergeMode = "sum", inputShape = Shape(32, 32))

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  9. class ConvLSTM2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Convolutional LSTM. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'same'. The convolution kernel for this layer is a square kernel with equal strides 'subsample'. The input of this layer should be 5D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  10. class Convolution1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Applies convolution operator for filtering neighborhoods of 1-D inputs. You can also use Conv1D as an alias of this layer. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  11. class Convolution2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Applies a 2D convolution over an input image composed of several input planes. You can also use Conv2D as an alias of this layer. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape=Shape(3, 128, 128) for 128x128 RGB pictures.
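
    For example, a hedged sketch of a convolution layer with 32 filters of size 3x3 on such an input (the positional arguments nbFilter, nbRow, nbCol are assumed):

    Sequential[Float]().add(Convolution2D(32, 3, 3, inputShape = Shape(3, 128, 128)))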

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  12. class Convolution3D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Applies convolution operator for filtering windows of three-dimensional inputs. You can also use Conv3D as an alias of this layer. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape = Shape(3, 10, 128, 128) for 10 frames of 128x128 RGB pictures.

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  13. class Cropping1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Cropping layer for 1D input (e.g. temporal sequence). The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  14. class Cropping2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Cropping layer for 2D input (e.g. picture). The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  15. class Cropping3D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Cropping layer for 3D data (e.g. spatial or spatio-temporal). The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  16. class Deconvolution2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Transposed convolution operator for filtering windows of 2-D inputs. The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e., from something that has the shape of the output of some convolution to something that has the shape of its input while maintaining a connectivity pattern that is compatible with said convolution. Data format currently supported for this layer is DataFormat.NCHW (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. You can also use Deconv2D as an alias of this layer. The input of this layer should be 4D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape = Shape(3, 128, 128) for 128x128 RGB pictures.

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  17. class Dense[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    A densely-connected NN layer. The most common input is 2D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
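
    For example, a minimal sketch of a 20-to-10 dense layer with a 'relu' activation (the activation parameter name is assumed):

    Sequential[Float]().add(Dense(10, activation = "relu", inputShape = Shape(20)))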

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  18. class Dropout[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Applies Dropout to the input by randomly setting a fraction 'p' of input units to 0 at each update during training time in order to prevent overfitting.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
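
    For example, a minimal sketch that drops 25% of the units of a 10-dimensional input (assuming the dropout fraction is the first argument):

    Sequential[Float]().add(Dropout(0.25, inputShape = Shape(10)))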

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  19. class ELU[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Exponential Linear Unit. It follows: f(x) = alpha * (exp(x) - 1.) for x < 0, f(x) = x for x >= 0.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  20. class Embedding[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Turn positive integers (indexes) into dense vectors of fixed size. The input of this layer should be 2D.

    This layer can only be used as the first layer in a model; you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
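
    For example, a hedged sketch that embeds sequences of 10 word indices drawn from a vocabulary of 1000 into 32-dimensional vectors (assuming inputDim and outputDim are the first two arguments):

    Sequential[Float]().add(Embedding(1000, 32, inputShape = Shape(10)))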

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  21. class Flatten[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Flattens the input without affecting the batch size.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
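
    For example, a minimal sketch that flattens a 3 x 4 x 5 input into a vector of length 60 (constructor shown without optional arguments):

    Sequential[Float]().add(Flatten(inputShape = Shape(3, 4, 5)))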

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  22. class GRU[T] extends Recurrent[T]

    Gated Recurrent Unit architecture. The input of this layer should be 3D, i.e. (batch, time steps, input dim).

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  23. class GaussianDropout[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Apply multiplicative 1-centered Gaussian noise. As it is a regularization layer, it is only active at training time.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  24. class GaussianNoise[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Apply additive zero-centered Gaussian noise. This is useful to mitigate overfitting (you could see it as a form of random data augmentation). Gaussian Noise is a natural choice as corruption process for real valued inputs. As it is a regularization layer, it is only active at training time.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  25. class GlobalAveragePooling1D[T] extends GlobalPooling1D[T]

    Applies global average pooling operation for temporal data. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  26. class GlobalAveragePooling2D[T] extends GlobalPooling2D[T]

    Applies global average pooling operation for spatial data. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  27. class GlobalAveragePooling3D[T] extends GlobalPooling3D[T]

    Applies global average pooling operation for 3D data. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  28. class GlobalMaxPooling1D[T] extends GlobalPooling1D[T]

    Applies global max pooling operation for temporal data. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  29. class GlobalMaxPooling2D[T] extends GlobalPooling2D[T]

    Applies global max pooling operation for spatial data. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  30. class GlobalMaxPooling3D[T] extends GlobalPooling3D[T]

    Applies global max pooling operation for 3D data. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  31. abstract class GlobalPooling1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Abstract class for different global pooling 1D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'GlobalAveragePooling1D' and 'GlobalMaxPooling1D' instead.

  32. abstract class GlobalPooling2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Abstract class for different global pooling 2D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'GlobalAveragePooling2D' and 'GlobalMaxPooling2D' instead.

  33. abstract class GlobalPooling3D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Abstract class for different global pooling 3D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'GlobalAveragePooling3D' and 'GlobalMaxPooling3D' instead.

  34. class Highway[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Densely connected highway network. Highway layers are a natural extension of LSTMs to feedforward networks. The input of this layer should be 2D, i.e. (batch, input dim).

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  35. class KerasIdentityWrapper[T] extends KerasLayer[Activity, Activity, T]

    Wraps a Torch-style layer into a Keras-style layer. This layer can be built multiple times. It is assumed that the input shape and the output shape stay the same in this layer.

    returns

    a keras compatible layer

  36. abstract class KerasLayer[A <: Activity, B <: Activity, T] extends Container[A, B, T]

    KerasLayer is the basic component of all Keras-like layers. It forwards activities and backpropagates gradients, and can be mixed with other AbstractModule implementations.

    A

    Input data type

    B

    Output data type

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now

    Annotations
    @SerialVersionUID()
  37. trait KerasLayerSerializable extends ContainerSerializable with TKerasSerializerHelper

  38. class KerasLayerWrapper[T] extends KerasLayer[Activity, Activity, T]

    Wraps a Torch-style layer into a Keras-style layer. This layer can be built multiple times.

    returns

    a keras compatible layer

  39. abstract class KerasModel[T] extends KerasLayer[Activity, Activity, T]

  40. class LSTM[T] extends Recurrent[T]

    Long Short Term Memory unit architecture. The input of this layer should be 3D, i.e. (batch, time steps, input dim).

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
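
    For example, a minimal sketch of an LSTM with 32 output units over sequences of 10 steps of 64 features each (returnSequences as shown in the Bidirectional example above):

    Sequential[Float]().add(LSTM(32, returnSequences = true, inputShape = Shape(10, 64)))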

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  41. class LeakyReLU[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Leaky version of a Rectified Linear Unit. It allows a small gradient when the unit is not active: f(x) = alpha * x for x < 0, f(x) = x for x >= 0.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  42. class LocallyConnected1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Locally-connected layer for 1D inputs which works similarly to the TemporalConvolution layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input. Border mode currently supported for this layer is 'valid'. The input of this layer should be 3D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  43. class LocallyConnected2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Locally-connected layer for 2D inputs that works similarly to the SpatialConvolution layer, except that weights are unshared, that is, a different set of filters is applied at each different patch of the input. The input of this layer should be 4D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  44. class Masking[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Masks a sequence by using a mask value to skip timesteps.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  45. class MaxPooling1D[T] extends Pooling1D[T]

    Applies max pooling operation for temporal data. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now

  46. class MaxPooling2D[T] extends Pooling2D[T]

    Applies max pooling operation for spatial data. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
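
    For example, a minimal sketch using the default pool size on a 3 x 32 x 32 input (optional arguments such as poolSize and strides are left at their assumed defaults):

    Sequential[Float]().add(MaxPooling2D(inputShape = Shape(3, 32, 32)))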

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  47. class MaxPooling3D[T] extends Pooling3D[T]

    Applies max pooling operation for 3D data (spatial or spatio-temporal). Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). Border mode currently supported for this layer is 'valid'. The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now

  48. class MaxoutDense[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    A dense maxout layer that takes the element-wise maximum of linear layers. This allows the layer to learn a convex, piecewise linear activation function over the inputs. The input of this layer should be 2D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  49. class Merge[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Used to merge a list of inputs into a single output, following some merge mode. To merge layers, it must take at least two input layers.

    When using this layer as the first layer in a model, you need to provide the argument inputShape for input layers (each as a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  50. class Permute[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Permutes the dimensions of the input according to a given pattern. Useful for connecting RNNs and convnets together.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
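
    For example, a hedged sketch that swaps the two non-batch dimensions of a 3 x 4 input (assuming the permutation pattern is given as an Array of dimension indices):

    Sequential[Float]().add(Permute(Array(2, 1), inputShape = Shape(3, 4)))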

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  51. abstract class Pooling1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Abstract class for different pooling 1D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'AveragePooling1D' and 'MaxPooling1D' instead.

  52. abstract class Pooling2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Abstract class for different pooling 2D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'AveragePooling2D' and 'MaxPooling2D' instead.

  53. abstract class Pooling3D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Abstract class for different pooling 3D layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'AveragePooling3D' and 'MaxPooling3D' instead.

  54. abstract class Recurrent[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    This is the abstract base class for recurrent layers. Do not create a new instance of it or use it in a model. Please use its child classes, 'SimpleRNN', 'LSTM' and 'GRU' instead.

  55. class RepeatVector[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Repeats the input n times. The input of this layer should be 2D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
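
    For example, a minimal sketch that repeats a 10-dimensional input 3 times, producing a 3 x 10 output (assuming the repetition count is the first argument):

    Sequential[Float]().add(RepeatVector(3, inputShape = Shape(10)))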

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  56. class Reshape[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Reshapes an output to a certain shape. Supports shape inference by allowing one -1 in the target shape. For example, if inputShape = Shape(2, 3, 4), targetShape = Array(3, -1), then outputShape will be Shape(3, 8).

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).
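
    Continuing the example above, a minimal sketch (assuming the target shape is the first constructor argument):

    Sequential[Float]().add(Reshape(Array(3, -1), inputShape = Shape(2, 3, 4)))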

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  57. class SReLU[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    S-shaped Rectified Linear Unit. It follows: f(x) = tr + ar(x - tr) for x >= tr, f(x) = x for tr > x > tl, f(x) = tl + al(x - tl) for x <= tl.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  58. class SeparableConvolution2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Applies separable convolution operator for 2D inputs. Separable convolutions consist in first performing a depthwise spatial convolution (which acts on each input channel separately) followed by a pointwise convolution which mixes together the resulting output channels. The depthMultiplier argument controls how many output channels are generated per input channel in the depthwise step. You can also use SeparableConv2D as an alias of this layer. The input of this layer should be 4D.

    When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension), e.g. inputShape = Shape(3, 128, 128) for 128x128 RGB pictures.

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  59. class SimpleRNN[T] extends Recurrent[T]

    A fully-connected recurrent neural network cell. The output is to be fed back to input. The input of this layer should be 3D, i.e. (batch, time steps, input dim).

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  60. class SoftMax[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Just a wrapper class. Please use Activation('softmax') instead.

  61. class SpatialDropout1D[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Spatial 1D version of Dropout. This version performs the same function as Dropout, however it drops entire 1D feature maps instead of individual elements. If adjacent frames within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout1D will help promote independence between feature maps and should be used instead. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  62. class SpatialDropout2D[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Spatial 2D version of Dropout. This version performs the same function as Dropout, however it drops entire 2D feature maps instead of individual elements. If adjacent pixels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout2D will help promote independence between feature maps and should be used instead. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  63. class SpatialDropout3D[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Spatial 3D version of Dropout. This version performs the same function as Dropout, however it drops entire 3D feature maps instead of individual elements. If adjacent voxels within feature maps are strongly correlated (as is normally the case in early convolution layers) then regular dropout will not regularize the activations and will otherwise just result in an effective learning rate decrease. In this case, SpatialDropout3D will help promote independence between feature maps and should be used instead. The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  64. class ThresholdedReLU[T] extends KerasLayer[Tensor[T], Tensor[T], T] with IdentityOutputShape

    Thresholded Rectified Linear Unit. It follows: f(x) = x for x > theta, f(x) = 0 otherwise.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    Numeric type of parameter(e.g. weight, bias). Only support float/double now.

  65. class TimeDistributed[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    TimeDistributed wrapper. Apply a layer to every temporal slice of an input. The input should be at least 3D, and the dimension of index one will be considered to be the temporal dimension. When using this layer as the first layer in a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    If you apply TimeDistributed to a Dense layer, you can use: TimeDistributed(Dense(8), inputShape = Shape(10, 12))

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  66. class UpSampling1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    UpSampling layer for 1D inputs. Repeats each temporal step 'length' times along the time axis. The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  67. class UpSampling2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    UpSampling layer for 2D inputs. Repeats the rows and columns of the data by size(0) and size(1) respectively. The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  68. class UpSampling3D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    UpSampling layer for 3D inputs. Repeats the 1st, 2nd and 3rd dimensions of the data by size(0), size(1) and size(2) respectively. Data format currently supported for this layer is 'CHANNEL_FIRST' (dimOrdering='th'). The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  69. class ZeroPadding1D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Zero-padding layer for 1D input (e.g. temporal sequence). The input of this layer should be 3D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  70. class ZeroPadding2D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Zero-padding layer for 2D input (e.g. picture). The input of this layer should be 4D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  71. class ZeroPadding3D[T] extends KerasLayer[Tensor[T], Tensor[T], T]

    Zero-padding layer for 3D data (spatial or spatio-temporal). The input of this layer should be 5D.

    When you use this layer as the first layer of a model, you need to provide the argument inputShape (a Single Shape, does not include the batch dimension).

    T

    The numeric type of parameter(e.g. weight, bias). Only support float/double now.

  72. class Input[T] extends KerasLayer[Activity, Activity, T]

    Annotations
    @deprecated
    Deprecated

    (Since version 0.10.0)

  73. class Model[T] extends KerasModel[T]

    Annotations
    @deprecated
    Deprecated

    (Since version 0.10.0)

  74. class Sequential[T] extends KerasModel[T]

    Annotations
    @deprecated
    Deprecated

    (Since version 0.10.0)

Value Members

  1. object Activation extends Serializable
  2. val AtrousConv1D: AtrousConvolution1D.type
  3. val AtrousConv2D: AtrousConvolution2D.type
  4. object AtrousConvolution1D extends Serializable
  5. object AtrousConvolution2D extends Serializable
  6. object AveragePooling1D extends Serializable
  7. object AveragePooling2D extends Serializable
  8. object AveragePooling3D extends Serializable
  9. object BatchNormalization extends Serializable
  10. object Bidirectional extends Serializable
  11. val Conv1D: Convolution1D.type
  12. val Conv2D: Convolution2D.type
  13. val Conv3D: Convolution3D.type
  14. object ConvLSTM2D extends Serializable
  15. object Convolution1D extends Serializable
  16. object Convolution2D extends Serializable
  17. object Convolution3D extends Serializable
  18. object Cropping1D extends Serializable
  19. object Cropping2D extends Serializable
  20. object Cropping3D extends Serializable
  21. val Deconv2D: Deconvolution2D.type
  22. object Deconvolution2D extends Serializable
  23. object Dense extends Serializable
  24. object Dropout extends Serializable
  25. object ELU extends Serializable
  26. object Embedding extends Serializable
  27. object Flatten extends Serializable
  28. object GRU extends Serializable
  29. object GaussianDropout extends Serializable
  30. object GaussianNoise extends Serializable
  31. object GlobalAveragePooling1D extends Serializable
  32. object GlobalAveragePooling2D extends Serializable
  33. object GlobalAveragePooling3D extends Serializable
  34. object GlobalMaxPooling1D extends Serializable
  35. object GlobalMaxPooling2D extends Serializable
  36. object GlobalMaxPooling3D extends Serializable
  37. object Highway extends Serializable
  38. object Input extends Serializable
  39. object InputLayer
  40. object KerasLayerSerializer extends KerasLayerSerializable
  41. object KerasUtils
  42. object LSTM extends Serializable
  43. object LeakyReLU extends Serializable
  44. object LocallyConnected1D extends Serializable
  45. object LocallyConnected2D extends Serializable
  46. object Masking extends Serializable
  47. object MaxPooling1D extends Serializable
  48. object MaxPooling2D extends Serializable
  49. object MaxPooling3D extends Serializable
  50. object MaxoutDense extends Serializable
  51. object Merge extends Serializable
  52. object Model extends KerasLayerSerializable with Serializable
  53. object Permute extends Serializable
  54. object RepeatVector extends Serializable
  55. object Reshape extends Serializable
  56. object SReLU extends Serializable
  57. val SeparableConv2D: SeparableConvolution2D.type
  58. object SeparableConvolution2D extends Serializable
  59. object Sequential extends KerasLayerSerializable with Serializable
  60. object SimpleRNN extends Serializable
  61. object SoftMax extends Serializable
  62. object SpatialDropout1D extends Serializable
  63. object SpatialDropout2D extends Serializable
  64. object SpatialDropout3D extends Serializable
  65. object ThresholdedReLU extends Serializable
  66. object TimeDistributed extends Serializable
  67. object UpSampling1D extends Serializable
  68. object UpSampling2D extends Serializable
  69. object UpSampling3D extends Serializable
  70. object ZeroPadding1D extends Serializable
  71. object ZeroPadding2D extends Serializable
  72. object ZeroPadding3D extends Serializable
