bigdl.util package

Submodules

bigdl.util.common module

class bigdl.util.common.Configuration[source]

Bases: object

static add_extra_jars(jars)[source]

Add extra jars to the classpath.

:param jars: a string or a list of strings as jar paths

static add_extra_python_modules(packages)[source]

Add extra Python modules to sys.path.

:param packages: a string or a list of strings as Python package paths
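Both methods above accept either a single path string or a list of path strings. A minimal, hypothetical sketch of that normalization (the actual implementation inside bigdl.util.common may differ):

```python
def normalize_paths(paths):
    # Accept a single string or a list of strings, as
    # add_extra_jars / add_extra_python_modules do.
    if isinstance(paths, str):
        return [paths]
    return list(paths)

# A single jar path and a list of paths normalize the same way.
print(normalize_paths("/opt/jars/extra.jar"))
print(normalize_paths(["/opt/a.jar", "/opt/b.jar"]))
```

Normalizing early lets the rest of the code handle only the list case.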

static get_bigdl_jars()[source]
class bigdl.util.common.EvaluatedResult(result, total_num, method)[source]

A testing result used to benchmark the model quality.

class bigdl.util.common.JActivity(value)[source]

Bases: object

class bigdl.util.common.JTensor(storage, shape, bigdl_type='float', indices=None)[source]

Bases: object

A wrapper that simplifies passing a Tensor to, or returning one from, Scala.

>>> import numpy as np
>>> from bigdl.util.common import JTensor
>>> np.random.seed(123)
>>>
classmethod from_ndarray(a_ndarray, bigdl_type='float')[source]

Convert an ndarray to a DenseTensor, which can be used on the Java side.

>>> import numpy as np
>>> from bigdl.util.common import JTensor
>>> from bigdl.util.common import callBigDlFunc
>>> np.random.seed(123)
>>> data = np.random.uniform(0, 1, (2, 3)).astype("float32")
>>> result = JTensor.from_ndarray(data)
>>> print(result)
JTensor: storage: [[ 0.69646919  0.28613934  0.22685145]
[ 0.55131477  0.71946895  0.42310646]], shape: [2 3], float
>>> result
JTensor: storage: [[ 0.69646919  0.28613934  0.22685145]
[ 0.55131477  0.71946895  0.42310646]], shape: [2 3], float
>>> data_back = result.to_ndarray()
>>> (data == data_back).all()
True
>>> tensor1 = callBigDlFunc("float", "testTensor", JTensor.from_ndarray(data))  # noqa
>>> array_from_tensor = tensor1.to_ndarray()
>>> (array_from_tensor == data).all()
True
classmethod sparse(a_ndarray, i_ndarray, shape, bigdl_type='float')[source]

Convert three ndarrays to a SparseTensor, which can be used on the Java side. For example:

a_ndarray = [1, 3, 2, 4]
i_ndarray = [[0, 0, 1, 2],
             [0, 3, 2, 1]]
shape = [3, 4]

represent the dense tensor

[[1, 0, 0, 3],
 [0, 0, 2, 0],
 [0, 4, 0, 0]]

:param a_ndarray: the non-zero elements of this SparseTensor
:param i_ndarray: zero-based indices of the non-zero elements; i_ndarray's shape should be (len(shape), len(a_ndarray)), and the indices of the i-th non-zero element are i_ndarray[:, i]
:param shape: the shape of the dense tensor

>>> import numpy as np
>>> from bigdl.util.common import JTensor
>>> from bigdl.util.common import callBigDlFunc
>>> np.random.seed(123)
>>> data = np.arange(1, 7).astype("float32")
>>> indices = np.arange(1, 7)
>>> shape = np.array([10])
>>> result = JTensor.sparse(data, indices, shape)
>>> result
JTensor: storage: [ 1.  2.  3.  4.  5.  6.], shape: [10] ,indices [1 2 3 4 5 6], float
>>> tensor1 = callBigDlFunc("float", "testTensor", result)  # noqa
>>> array_from_tensor = tensor1.to_ndarray()
>>> expected_ndarray = np.array([0, 1, 2, 3, 4, 5, 6, 0, 0, 0])
>>> (array_from_tensor == expected_ndarray).all()
True
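The index layout described above can be illustrated without Spark. This pure-Python sketch expands the (a_ndarray, i_ndarray, shape) example from the docstring into its dense form:

```python
def coo_to_dense(values, indices, shape):
    """Expand zero-based COO data into a dense nested list.

    indices has one row per dimension; the coordinates of the
    i-th non-zero element are indices[:, i], i.e. column i.
    """
    rows, cols = shape
    dense = [[0] * cols for _ in range(rows)]
    for i, v in enumerate(values):
        r = indices[0][i]
        c = indices[1][i]
        dense[r][c] = v
    return dense

# The 2-D example from the docstring above:
values = [1, 3, 2, 4]
indices = [[0, 0, 1, 2],
           [0, 3, 2, 1]]
print(coo_to_dense(values, indices, [3, 4]))
# -> [[1, 0, 0, 3], [0, 0, 2, 0], [0, 4, 0, 0]]
```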
to_ndarray()[source]

Convert this JTensor to an ndarray. Because a SparseTensor may expand into a very large ndarray, this function is not supported for SparseTensor.

:return: an ndarray

class bigdl.util.common.JavaCreator(bigdl_type)[source]

Bases: bigdl.util.common.SingletonMixin

classmethod get_creator_class()[source]
classmethod set_creator_class(cclass)[source]
class bigdl.util.common.JavaValue(jvalue, bigdl_type, *args)[source]

Bases: object

jvm_class_constructor()[source]
class bigdl.util.common.RNG(bigdl_type='float')[source]

Generate tensor data with a seed.

set_seed(seed)[source]
uniform(a, b, size)[source]
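set_seed makes subsequent uniform draws reproducible. That contract can be illustrated with Python's own random module (a stand-in only; RNG performs the actual draws on the JVM side):

```python
import random

def seeded_uniform(seed, a, b, size):
    # Mimic RNG.set_seed followed by RNG.uniform(a, b, size):
    # the same seed yields the same sequence of draws in [a, b].
    rng = random.Random(seed)
    return [rng.uniform(a, b) for _ in range(size)]

first = seeded_uniform(123, 0.0, 1.0, 4)
second = seeded_uniform(123, 0.0, 1.0, 4)
print(first == second)  # -> True
```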
class bigdl.util.common.Sample(features, labels, bigdl_type='float')[source]

Bases: object

classmethod from_jtensor(features, labels, bigdl_type='float')[source]

Convert a sequence of JTensors to a Sample, which can be used on the Java side.

:param features: a JTensor or a list of JTensors
:param labels: a JTensor, a list of JTensors, or a scalar
:param bigdl_type: "double" or "float"

>>> import numpy as np
>>> data = np.random.uniform(0, 1, (6)).astype("float32")
>>> indices = np.arange(1, 7)
>>> shape = np.array([10])
>>> feature0 = JTensor.sparse(data, indices, shape)
>>> feature1 = JTensor.from_ndarray(np.random.uniform(0, 1, (2, 3)).astype("float32"))
>>> sample = Sample.from_jtensor([feature0, feature1], 1)
classmethod from_ndarray(features, labels, bigdl_type='float')[source]

Convert ndarrays of features and labels to a Sample, which can be used on the Java side.

:param features: an ndarray or a list of ndarrays
:param labels: an ndarray, a list of ndarrays, or a scalar
:param bigdl_type: "double" or "float"

>>> import numpy as np
>>> from bigdl.util.common import callBigDlFunc
>>> from numpy.testing import assert_allclose
>>> np.random.seed(123)
>>> sample = Sample.from_ndarray(np.random.random((2,3)), np.random.random((2,3)))
>>> sample_back = callBigDlFunc("float", "testSample", sample)
>>> assert_allclose(sample.features[0].to_ndarray(), sample_back.features[0].to_ndarray())
>>> assert_allclose(sample.label.to_ndarray(), sample_back.label.to_ndarray())
>>> print(sample)
Sample: features: [JTensor: storage: [[ 0.69646919  0.28613934  0.22685145]
[ 0.55131477  0.71946895  0.42310646]], shape: [2 3], float], labels: [JTensor: storage: [[ 0.98076421  0.68482971  0.48093191]
[ 0.39211753  0.343178    0.72904968]], shape: [2 3], float],
class bigdl.util.common.SingletonMixin[source]

Bases: object

classmethod instance(bigdl_type='float')[source]
bigdl.util.common.callBigDlFunc(bigdl_type, name, *args)[source]

Call an API in PythonBigDL.

bigdl.util.common.callJavaFunc(sc, func, *args)[source]

Call a Java function.

bigdl.util.common.create_spark_conf()[source]
bigdl.util.common.create_tmp_path()[source]
bigdl.util.common.extend_spark_driver_cp(sparkConf, path)[source]
bigdl.util.common.get_activation_by_name(activation_name, activation_id=None)[source]

Convert an activation name string to the corresponding BigDL activation layer.
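One common way to implement such a lookup is to resolve the name against a namespace of layer classes. A hypothetical sketch with stand-in classes (BigDL's actual dispatch may differ):

```python
class ReLU:
    pass

class Tanh:
    pass

# A stand-in registry of available activation layers.
ACTIVATIONS = {"relu": ReLU, "tanh": Tanh}

def get_activation_sketch(activation_name):
    # Look up the (case-insensitive) name and instantiate the layer.
    try:
        cls = ACTIVATIONS[activation_name.lower()]
    except KeyError:
        raise ValueError("unknown activation: %s" % activation_name)
    return cls()

print(type(get_activation_sketch("relu")).__name__)  # -> ReLU
```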

bigdl.util.common.get_bigdl_conf()[source]
bigdl.util.common.get_dtype(bigdl_type)[source]
bigdl.util.common.get_local_file(a_path)[source]
bigdl.util.common.get_spark_context(conf=None)[source]

Get the currently active SparkContext, creating one if no active instance exists.

:param conf: a SparkConf combining BigDL configs
:return: SparkContext
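The get-or-create behavior described above can be sketched generically in pure Python (a hypothetical stand-in; the real function returns an actual SparkContext):

```python
_active_context = None

def get_or_create_context(conf=None):
    # Return the active context if one exists; otherwise create one
    # with the supplied configuration, as get_spark_context does.
    global _active_context
    if _active_context is None:
        _active_context = {"conf": conf or {}}
    return _active_context

first = get_or_create_context({"spark.app.name": "bigdl-demo"})
second = get_or_create_context()  # conf is ignored once a context exists
print(first is second)  # -> True
```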

bigdl.util.common.get_spark_sql_context(sc)[source]
bigdl.util.common.init_engine(bigdl_type='float')[source]
bigdl.util.common.is_distributed(path)[source]
bigdl.util.common.redire_spark_logs(bigdl_type='float', log_path='/var/jenkins_home/workspace/BigDL-Doc-Release/BigDL/pyspark/docs/bigdl.log')[source]

Redirect Spark logs to the specified path.

:param bigdl_type: "double" or "float"
:param log_path: the file path to redirect to; by default, a file named bigdl.log under the current workspace

bigdl.util.common.show_bigdl_info_logs(bigdl_type='float')[source]

Set the BigDL log level to INFO.

:param bigdl_type: "double" or "float"

bigdl.util.common.text_from_path(path)[source]
bigdl.util.common.to_list(a)[source]
bigdl.util.common.to_sample_rdd(x, y, numSlices=None)[source]

Convert x and y into an RDD[Sample].

:param x: an ndarray whose first dimension is the batch dimension
:param y: an ndarray whose first dimension is the batch dimension
:param numSlices: the number of partitions for the resulting RDD
:return: RDD[Sample]
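The pairing along the batch dimension can be shown without Spark: the i-th row of x is matched with the i-th row of y, and each pair would become one Sample (a pure-Python sketch; the real function wraps each pair in a Sample and distributes them via the SparkContext):

```python
def to_sample_pairs(x, y):
    # Pair the i-th feature row with the i-th label row; to_sample_rdd
    # then turns each pair into a Sample and parallelizes the result.
    if len(x) != len(y):
        raise ValueError("x and y must share the batch dimension")
    return list(zip(x, y))

features = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # batch of 3
labels = [[1.0], [0.0], [1.0]]
print(to_sample_pairs(features, labels))
```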

bigdl.util.engine module

bigdl.util.engine.check_spark_source_conflict(spark_home, pyspark_path)[source]
bigdl.util.engine.compare_version(version1, version2)[source]

Compare two version strings.

:param version1: the first version string
:param version2: the second version string
:return: 1 if version1 is after version2; -1 if version1 is before version2; 0 if the two versions are the same
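A minimal sketch of dotted-numeric version comparison consistent with the return contract above (assuming purely numeric components; the actual implementation may handle additional formats):

```python
def compare_version_sketch(version1, version2):
    # Compare dot-separated numeric versions, padding the shorter
    # one with zeros so that "1.6" compares equal to "1.6.0".
    p1 = [int(part) for part in version1.split(".")]
    p2 = [int(part) for part in version2.split(".")]
    width = max(len(p1), len(p2))
    p1 += [0] * (width - len(p1))
    p2 += [0] * (width - len(p2))
    if p1 > p2:
        return 1
    if p1 < p2:
        return -1
    return 0

print(compare_version_sketch("2.2.0", "2.1.3"))  # -> 1
print(compare_version_sketch("1.6", "1.6.0"))    # -> 0
```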

bigdl.util.engine.exist_pyspark()[source]
bigdl.util.engine.get_bigdl_classpath()[source]

Return the jar path for BigDL if it exists.

bigdl.util.engine.is_spark_below_2_2()[source]

Check whether the Spark version is below 2.2.

bigdl.util.engine.prepare_env()[source]

bigdl.util.tf_utils module

Module contents