BigDL module to be optimized
BigDL criterion (loss function) to be optimized
The size (Tensor dimensions) of the feature data, e.g. an image with width * height = 28 * 28 has featureSize = Array(28, 28).
The size (Tensor dimensions) of the label data.
BigDL criterion (loss function) to be optimized
The size (Tensor dimensions) of the feature data.
The size (Tensor dimensions) of the feature data, e.g. an image with width * height = 28 * 28 has featureSize = Array(28, 28).
The size (Tensor dimensions) of the label data.
Learning rate for the optimizer in the DLEstimator.
Learning rate for the optimizer in the DLEstimator. Default: 0.001
Learning rate decay.
Learning rate decay. Default: 0
Maximum number of epochs for the training; an epoch refers to one traverse over the training data. Default: 100
BigDL module to be optimized
Optimization method to be used.
Optimization method to be used. BigDL supports many optimization methods such as Adam, SGD and LBFGS; refer to the package com.intel.analytics.bigdl.optim for all the options. Default: SGD
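The training hyper-parameters above can be set fluently on the estimator. A minimal sketch, assuming BigDL is on the classpath; the variables `model` and `criterion` stand in for a real module and loss:

```scala
// Hedged sketch: configuring training hyper-parameters on a DLEstimator.
// `model` and `criterion` are placeholders for an actual BigDL module and criterion.
import com.intel.analytics.bigdl.optim.Adam

val estimator = new DLEstimator(model, criterion,
    featureSize = Array(28, 28), labelSize = Array(1))
  .setLearningRate(1e-3)      // default: 0.001
  .setLearningRateDecay(0.0)  // default: 0
  .setMaxEpoch(100)           // default: 100
  .setOptimMethod(new Adam()) // default: SGD
```

The setter names mirror the parameter names documented above; if a setter is unavailable in your BigDL version, the corresponding param can be set via the standard Spark ML `set(param, value)` mechanism.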
Subclasses can override this method to return the required model for different transform tasks.
Subclasses can override this method to return the required model for different transform tasks.
DLEstimator helps to train a BigDL Model with the Spark ML Estimator/Transformer pattern, so Spark users can conveniently fit BigDL into a Spark ML pipeline.
DLEstimator supports feature and label data in the format of Array[Double], Array[Float], org.apache.spark.mllib.linalg.{Vector, VectorUDT} for Spark 1.5 and 1.6, and org.apache.spark.ml.linalg.{Vector, VectorUDT} for Spark 2.0+. Label data can also be of DoubleType. Users should specify the feature and label data dimensions via the constructor parameters featureSize and labelSize respectively. Internally, the feature and label data are converted to BigDL Tensors so that a BigDL model can be trained efficiently.
For detailed usage, please refer to the examples in the package com.intel.analytics.bigdl.example.MLPipeline
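An end-to-end sketch of the Estimator/Transformer pattern described above, hedged: the single-layer model, column names, and input path below are illustrative assumptions and are not taken from the examples package.

```scala
// Hedged sketch: training a BigDL model through DLEstimator on a Spark 2.x DataFrame.
// Assumes BigDL and Spark are on the classpath; the data source is illustrative.
import com.intel.analytics.bigdl.nn.{Linear, LogSoftMax, Sequential, ClassNLLCriterion}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("DLEstimatorSketch").getOrCreate()

// Expects a "features" column holding Array[Double] of length 784 (28 * 28)
// and a DoubleType "label" column with 1-based class indices.
val df = spark.read.parquet("...")  // input path elided

val model = Sequential[Double]().add(Linear(784, 10)).add(LogSoftMax())
val criterion = ClassNLLCriterion[Double]()

val estimator = new DLEstimator(model, criterion,
    featureSize = Array(784), labelSize = Array(1))
  .setFeaturesCol("features")
  .setLabelCol("label")
  .setBatchSize(64)

val dlModel = estimator.fit(df)        // returns a DLModel, a Spark ML Transformer
val predictions = dlModel.transform(df) // appends a prediction column
```

The fitted DLModel can be inserted into a Spark ML Pipeline alongside other Transformers, which is the integration the class documentation describes.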