Trained BigDL model to use in prediction.
The size (Tensor dimensions) of the feature data (e.g. an image may have featureSize = 28 * 28).
When to stop the training, passed in as a Trigger, e.g. Trigger.maxIterations.
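For illustration, the stopping condition is built with the Trigger factory from com.intel.analytics.bigdl.optim (a minimal sketch; the exact factory names may vary slightly across BigDL versions):

```scala
import com.intel.analytics.bigdl.optim.Trigger

// Two common stopping conditions:
val stopAfterIterations = Trigger.maxIteration(1000) // stop after 1000 mini-batches
val stopAfterEpochs     = Trigger.maxEpoch(10)       // stop after 10 passes over the data
```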
The size (Tensor dimensions) of the feature data (e.g. an image may have featureSize = 28 * 28).
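As a small illustration, featureSize is simply the per-sample tensor shape; whether to flatten it depends on what the model expects:

```scala
// Per-sample tensor shape for a 28 x 28 image; use Array(784) instead if the
// model expects a flattened vector.
val featureSize = Array(28, 28)
```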
Get the conversion function to extract data from the original DataFrame.
Perform a prediction on featureCol, and write the result to the predictionCol.
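A minimal sketch of the prediction path, assuming `spark` is an active SparkSession, `dlModel` is an already constructed DLModel, and the test data carries a "features" column of a supported type (the parquet path is a placeholder):

```scala
// Run prediction and inspect the written predictionCol (default name "prediction").
val testDF = spark.read.parquet("hdfs://.../test_features.parquet") // placeholder path
val predicted = dlModel.transform(testDF)
predicted.select("features", "prediction").show(5)
```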
Learning rate for the optimizer in the DLEstimator. Default: 0.001
Learning rate decay for each iteration. Default: 0
Maximum number of epochs for the training; an epoch refers to one traversal over the whole training data. Default: 50
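To show how these training hyper-parameters fit together, here is a sketch of a DLEstimator for a toy regression task (setter names such as setLearningRate, setLearningRateDecay, setMaxEpoch and setBatchSize are assumed; in newer BigDL releases the class lives under com.intel.analytics.bigdl.dlframes rather than org.apache.spark.ml):

```scala
import com.intel.analytics.bigdl.nn.{Linear, MSECriterion, Sequential}
import org.apache.spark.ml.DLEstimator

// Toy network: 10 input features -> 1 output, trained with mean squared error.
val model = Sequential[Float]().add(Linear[Float](10, 1))
val criterion = MSECriterion[Float]()

// featureSize = Array(10) and labelSize = Array(1) are placeholder shapes.
val estimator = new DLEstimator[Float](model, criterion, Array(10), Array(1))
  .setLearningRate(0.001)    // default: 0.001
  .setLearningRateDecay(0.0) // default: 0
  .setMaxEpoch(50)           // default: 50
  .setBatchSize(32)
```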
Trained BigDL model to use in prediction.
Optimization method to be used. BigDL supports many optimization methods such as Adam, SGD and LBFGS; refer to the package com.intel.analytics.bigdl.optim for all the options. Default: SGD
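Continuing the estimator sketch above, switching away from the default SGD would look roughly like this (setOptimMethod is assumed to be the setter backing the optimMethod parameter):

```scala
import com.intel.analytics.bigdl.optim.Adam

// Use Adam instead of the default SGD.
estimator.setOptimMethod(new Adam[Float]())
```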
Validate whether the feature and label columns are of supported data types.
DLModel helps embed a BigDL model into a Spark Transformer, so Spark users can conveniently merge BigDL into a Spark ML pipeline. DLModel supports feature data in the formats Array[Double], Array[Float], org.apache.spark.mllib.linalg.{Vector, VectorUDT}, org.apache.spark.ml.linalg.{Vector, VectorUDT}, Double and Float. Internally, DLModel uses the features column as storage for the feature data and creates Tensors according to the constructor parameter featureSize.
DLModel is compatible with both Spark 1.5+ and 2.0 by extending the ML Transformer.
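As a sketch of how DLModel plugs into a Spark ML pipeline (assuming `trainedModel` is a Module[Float] trained elsewhere, and `trainDF`/`testDF` carry a "features" column; in newer BigDL releases DLModel lives under com.intel.analytics.bigdl.dlframes):

```scala
import org.apache.spark.ml.{DLModel, Pipeline, PipelineStage}

// Wrap an already-trained BigDL model as a Spark ML Transformer; the featureSize
// Array(28, 28) matches the image example above.
val dlModel = new DLModel[Float](trainedModel, Array(28, 28))

// DLModel is a Transformer, so it can run standalone or as a pipeline stage.
val predictions = new Pipeline()
  .setStages(Array[PipelineStage](dlModel))
  .fit(trainDF)
  .transform(testDF)
```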