bigdl.transform.vision package

Submodules

bigdl.transform.vision.image module

class bigdl.transform.vision.image.AspectScale(min_size, scale_multiple_of=1, max_size=1000, resize_mode=1, use_scale_factor=True, min_scale=-1.0, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Resize the image while keeping the aspect ratio; the scale is computed from the short edge.
:param min_size scale size, applied to the short edge
:param scale_multiple_of make the scaled size a multiple of some value
:param max_size max size after scale
:param resize_mode if resize_mode = -1, randomly select a mode from (Imgproc.INTER_LINEAR, Imgproc.INTER_CUBIC, Imgproc.INTER_AREA, Imgproc.INTER_NEAREST, Imgproc.INTER_LANCZOS4)
:param use_scale_factor if true, scale factors fx and fy are used, with fx = fy = 0
:param min_scale controls the minimum scale-up for the image
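The size arithmetic that AspectScale performs can be sketched in plain Python. This is an illustrative function (`aspect_scale_size` is a hypothetical name, not part of BigDL): scale so the short edge reaches min_size, cap the long edge at max_size, and round both dimensions to a multiple of scale_multiple_of.

```python
def aspect_scale_size(width, height, min_size, max_size=1000, multiple_of=1):
    # Scale so the short edge reaches min_size.
    short, long = min(width, height), max(width, height)
    scale = float(min_size) / short
    # Keep the long edge within max_size.
    if long * scale > max_size:
        scale = float(max_size) / long
    # Round each scaled dimension to a multiple of multiple_of.
    round_to = lambda v: int(round(v * scale / multiple_of)) * multiple_of
    return round_to(width), round_to(height)
```

For example, a 400 x 600 image with min_size=200 scales by 0.5 to 200 x 300; with scale_multiple_of=32 both dimensions are additionally snapped to multiples of 32.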

class bigdl.transform.vision.image.Brightness(delta_low, delta_high, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Adjust the image brightness.
:param delta_low brightness parameter: low bound
:param delta_high brightness parameter: high bound
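The adjustment can be sketched as adding a delta drawn uniformly from [delta_low, delta_high] to every pixel. This is an illustrative pure-Python sketch (`adjust_brightness` is a hypothetical name), not BigDL's native OpenCV implementation:

```python
import random

def adjust_brightness(pixels, delta_low, delta_high, rng=random):
    # Draw one delta for the whole image and add it to every pixel value.
    delta = rng.uniform(delta_low, delta_high)
    return [p + delta for p in pixels]
```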

class bigdl.transform.vision.image.BytesToMat(byte_key='bytes', bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Transform a byte array (original image file in bytes) to OpenCVMat.
:param byte_key key that maps the byte array

class bigdl.transform.vision.image.CenterCrop(crop_width, crop_height, is_clip=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Crop a crop_width x crop_height patch from the center of the image. The patch size should be less than the image size.
:param crop_width width after crop
:param crop_height height after crop
:param is_clip whether to clip the cropping box to image boundaries

class bigdl.transform.vision.image.ChannelNormalize(mean_r, mean_g, mean_b, std_r=1.0, std_g=1.0, std_b=1.0, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Image channel normalization.
:param mean_r mean value in R channel
:param mean_g mean value in G channel
:param mean_b mean value in B channel
:param std_r std value in R channel
:param std_g std value in G channel
:param std_b std value in B channel
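The per-channel arithmetic is (value - mean) / std for each of R, G, B. A minimal illustrative sketch (`channel_normalize` is a hypothetical name, not BigDL's native code):

```python
def channel_normalize(pixel_rgb, means, stds=(1.0, 1.0, 1.0)):
    # Subtract the channel mean and divide by the channel std, per channel.
    return tuple((v - m) / s for v, m, s in zip(pixel_rgb, means, stds))
```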

class bigdl.transform.vision.image.ChannelOrder(bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Randomly change the channel order of an image.

class bigdl.transform.vision.image.ColorJitter(brightness_prob=0.5, brightness_delta=32.0, contrast_prob=0.5, contrast_lower=0.5, contrast_upper=1.5, hue_prob=0.5, hue_delta=18.0, saturation_prob=0.5, saturation_lower=0.5, saturation_upper=1.5, random_order_prob=0.0, shuffle=False, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Randomly adjust brightness, contrast, hue, and saturation.
:param brightness_prob probability to adjust brightness
:param brightness_delta brightness parameter
:param contrast_prob probability to adjust contrast
:param contrast_lower contrast lower parameter
:param contrast_upper contrast upper parameter
:param hue_prob probability to adjust hue
:param hue_delta hue parameter
:param saturation_prob probability to adjust saturation
:param saturation_lower saturation lower parameter
:param saturation_upper saturation upper parameter
:param random_order_prob probability to apply the operations in a random order
:param shuffle whether to shuffle the transformers

class bigdl.transform.vision.image.Contrast(delta_low, delta_high, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Adjust the image contrast.
:param delta_low contrast parameter: low bound
:param delta_high contrast parameter: high bound

class bigdl.transform.vision.image.DetectionCrop(roi_key, normalized=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Crop from object detections; each image should have a detection tensor stored in its ImageFeature.
:param roi_key key that maps the detection tensor
:param normalized whether the detection is normalized, i.e. in range [0, 1]

class bigdl.transform.vision.image.DistributedImageFrame(image_rdd=None, label_rdd=None, jvalue=None, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.ImageFrame

DistributedImageFrame wraps an RDD of ImageFeature

get_image(float_key='floats', to_chw=True)[source]

get image rdd from ImageFrame

get_label()[source]

get label rdd from ImageFrame

get_predict(key='predict')[source]

get prediction rdd from ImageFrame

get_sample(key='sample')[source]

get sample from ImageFrame

get_uri(key='uri')[source]

get uri from ImageFrame

random_split(weights)[source]

Randomly split the ImageFrame according to weights.
:param weights: weights for each ImageFrame

class bigdl.transform.vision.image.Expand(means_r=123, means_g=117, means_b=104, min_expand_ratio=1.0, max_expand_ratio=4.0, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Expand the image and fill the blank area with (means_r, means_g, means_b).

:param means_r mean in R channel
:param means_g mean in G channel
:param means_b mean in B channel
:param min_expand_ratio min expand ratio
:param max_expand_ratio max expand ratio

class bigdl.transform.vision.image.FeatureTransformer(bigdl_type='float', *args)[source]

Bases: bigdl.util.common.JavaValue

FeatureTransformer is a transformer that transform ImageFeature

transform(image_feature, bigdl_type='float')[source]

transform ImageFeature

class bigdl.transform.vision.image.Filler(start_x, start_y, end_x, end_y, value=255, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Fill part of the image with a certain pixel value.
:param start_x start x ratio
:param start_y start y ratio
:param end_x end x ratio
:param end_y end y ratio
:param value filling value

class bigdl.transform.vision.image.FixExpand(expand_height, expand_width, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Expand the image to the given expand_height and expand_width, putting the original image at the center of the expanded image.
:param expand_height height to expand to
:param expand_width width to expand to

class bigdl.transform.vision.image.FixedCrop(x1, y1, x2, y2, normalized=True, is_clip=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Crop a fixed area of the image.

:param x1 start in width
:param y1 start in height
:param x2 end in width
:param y2 end in height
:param normalized whether the args are normalized, i.e. in range [0, 1]
:param is_clip whether to clip the roi to image boundaries
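The coordinate handling can be sketched on a list-of-rows image: normalized coordinates are scaled to pixels, optionally clipped to the image boundary, then used to slice out the patch. This is an illustrative sketch (`fixed_crop` is a hypothetical name, not BigDL's OpenCV implementation):

```python
def fixed_crop(img, x1, y1, x2, y2, normalized=True, is_clip=True):
    h, w = len(img), len(img[0])
    # Map [0, 1] coordinates to pixel coordinates.
    if normalized:
        x1, x2, y1, y2 = x1 * w, x2 * w, y1 * h, y2 * h
    # Clip the roi to the image boundaries.
    if is_clip:
        x1, y1 = max(0.0, x1), max(0.0, y1)
        x2, y2 = min(float(w), x2), min(float(h), y2)
    return [row[int(x1):int(x2)] for row in img[int(y1):int(y2)]]
```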

class bigdl.transform.vision.image.HFlip(bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Flip the image horizontally

class bigdl.transform.vision.image.Hue(delta_low, delta_high, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Adjust the image hue.
:param delta_low hue parameter: low bound
:param delta_high hue parameter: high bound

class bigdl.transform.vision.image.ImageFeature(image=None, label=None, path=None, bigdl_type='float')[source]

Bases: bigdl.util.common.JavaValue

Each ImageFeature keeps information about a single image. It can hold various states of an image, e.g. the original bytes read from the image file, an OpenCV mat, pixels as a float array, the image label, meta data and so on. A HashMap stores all of these, keyed by strings that identify the corresponding values.

get_image(float_key='floats', to_chw=True)[source]

get image as ndarray from ImageFeature

get_label()[source]

get label as ndarray from ImageFeature

keys()[source]

get key set from ImageFeature

class bigdl.transform.vision.image.ImageFrame(jvalue, bigdl_type='float')[source]

Bases: bigdl.util.common.JavaValue

ImageFrame wraps a set of ImageFeature

get_image(float_key='floats', to_chw=True)[source]

get image from ImageFrame

get_label()[source]

get label from ImageFrame

get_predict(key='predict')[source]

get prediction from ImageFrame

get_sample()[source]

get sample from ImageFrame

get_uri()[source]

get uri from ImageFrame

is_distributed()[source]

whether this is a DistributedImageFrame

is_local()[source]

whether this is a LocalImageFrame

random_split(weights)[source]

Randomly split the ImageFrame according to weights.
:param weights: weights for each ImageFrame

classmethod read(path, sc=None, min_partitions=1, bigdl_type='float')[source]

Read images as an ImageFrame.
If sc is defined, read images as a DistributedImageFrame from the local file system or HDFS; if sc is null, read images as a LocalImageFrame from the local file system.
:param path path to read images from. If sc is defined, the path can be local or HDFS, and wildcard characters are supported. If sc is null, the path is a local directory, an image file, or an image file path with wildcard characters.
:param sc SparkContext
:param min_partitions a suggested minimal number of splits for the input data
:return ImageFrame

classmethod read_parquet(path, sc, bigdl_type='float')[source]

Read parquet file as DistributedImageFrame

set_label(label, bigdl_type='float')[source]

set label for ImageFrame

transform(transformer, bigdl_type='float')[source]

transform ImageFrame

classmethod write_parquet(path, output, sc, partition_num=1, bigdl_type='float')[source]

write ImageFrame as parquet file

class bigdl.transform.vision.image.ImageFrameToSample(input_keys=['imageTensor'], target_keys=None, sample_key='sample', bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Transform an ImageFrame to Samples.
:param input_keys keys that map inputs (each input should be a tensor)
:param target_keys keys that map targets (each target should be a tensor)
:param sample_key key to store the sample

class bigdl.transform.vision.image.LocalImageFrame(image_list=None, label_list=None, jvalue=None, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.ImageFrame

LocalImageFrame wraps a list of ImageFeature

get_image(float_key='floats', to_chw=True)[source]

get image list from ImageFrame

get_label()[source]

get label list from ImageFrame

get_predict(key='predict')[source]

get prediction list from ImageFrame

get_sample(key='sample')[source]

get sample from ImageFrame

get_uri(key='uri')[source]

get uri from ImageFrame

random_split(weights)[source]

Randomly split the ImageFrame according to weights.
:param weights: weights for each ImageFrame

class bigdl.transform.vision.image.MatToFloats(valid_height=300, valid_width=300, valid_channel=300, out_key='floats', share_buffer=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Transform OpenCVMat to a float array; note that the mat is released in this transformer.
:param valid_height valid height in case the mat is invalid
:param valid_width valid width in case the mat is invalid
:param valid_channel valid channel in case the mat is invalid
:param out_key key to store the float array
:param share_buffer whether to share the output buffer

class bigdl.transform.vision.image.MatToTensor(to_rgb=False, tensor_key='imageTensor', bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Transform an OpenCV mat to a tensor.
:param to_rgb convert BGR to RGB (default is BGR)
:param tensor_key key to store the transformed tensor

class bigdl.transform.vision.image.Pipeline(transformers, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Pipeline of FeatureTransformers
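A Pipeline chains FeatureTransformers so that each transformer's output feeds the next. This pure-Python sketch mirrors that composition with plain functions (`pipeline` is a hypothetical name, not the BigDL class):

```python
def pipeline(transformers):
    # Return a single callable that applies the transformers in order.
    def apply(feature):
        for t in transformers:
            feature = t(feature)
        return feature
    return apply
```

In BigDL the same idea is expressed with the documented classes, e.g. `Pipeline([BytesToMat(), Resize(256, 256), MatToTensor()])`.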

class bigdl.transform.vision.image.PixelBytesToMat(byte_key='bytes', bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Transform a byte array (pixels in bytes) to OpenCVMat.
:param byte_key key that maps the byte array

class bigdl.transform.vision.image.PixelNormalize(means, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Pixel level normalizer, data(i) = data(i) - mean(i)

:param means pixel level mean, following H * W * C order
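The formula data(i) = data(i) - mean(i) can be sketched as a one-liner over flat H * W * C arrays (illustrative only; `pixel_normalize` is a hypothetical name):

```python
def pixel_normalize(data, means):
    # One mean per pixel value, laid out in H * W * C order.
    assert len(data) == len(means), "means must cover every pixel value"
    return [d - m for d, m in zip(data, means)]
```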

class bigdl.transform.vision.image.RandomAspectScale(scales, scale_multiple_of=1, max_size=1000, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Resize the image by randomly choosing a scale.
:param scales array of scale options for random choice
:param scale_multiple_of resize images so that width and height are multiples of this value
:param max_size max pixel size of the longest side of a scaled input image

class bigdl.transform.vision.image.RandomCrop(crop_width, crop_height, is_clip=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Randomly crop a crop_width x crop_height patch from the image. The patch size should be less than the image size.

:param crop_width width after crop
:param crop_height height after crop
:param is_clip whether to clip the roi to image boundaries

class bigdl.transform.vision.image.RandomSampler[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Randomly sample a bounding box given some constraints and crop the image. This is used in SSD training augmentation.

class bigdl.transform.vision.image.RandomTransformer(transformer, prob, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

A wrapper for transformers to control the transform probability.
:param transformer the transformer to apply randomly
:param prob the probability to apply the transformer
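The control flow can be sketched in plain Python (illustrative; `random_transformer` is a hypothetical name): the wrapped transformer fires with probability prob, otherwise the feature passes through unchanged.

```python
import random

def random_transformer(transformer, prob, rng=random):
    # Apply the wrapped transformer with probability prob.
    def apply(feature):
        return transformer(feature) if rng.random() < prob else feature
    return apply
```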

class bigdl.transform.vision.image.Resize(resize_h, resize_w, resize_mode=1, use_scale_factor=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Resize the image.
:param resize_h height after resize
:param resize_w width after resize
:param resize_mode if resize_mode = -1, randomly select a mode from (Imgproc.INTER_LINEAR, Imgproc.INTER_CUBIC, Imgproc.INTER_AREA, Imgproc.INTER_NEAREST, Imgproc.INTER_LANCZOS4)
:param use_scale_factor if true, scale factors fx and fy are used, with fx = fy = 0
Note that the results of the following two calls differ:
Imgproc.resize(mat, mat, new Size(resizeWH, resizeWH), 0, 0, Imgproc.INTER_LINEAR)
Imgproc.resize(mat, mat, new Size(resizeWH, resizeWH))

class bigdl.transform.vision.image.RoiHFlip(normalized=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Horizontally flip the roi.
:param normalized whether the roi is normalized, i.e. in range [0, 1]

class bigdl.transform.vision.image.RoiNormalize(bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Normalize Roi to [0, 1]
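Normalizing a roi means dividing its box coordinates by the image width and height so they fall in [0, 1]. A minimal illustrative sketch (`roi_normalize` is a hypothetical name):

```python
def roi_normalize(box, width, height):
    # Divide x coordinates by width and y coordinates by height.
    x1, y1, x2, y2 = box
    return (x1 / width, y1 / height, x2 / width, y2 / height)
```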

class bigdl.transform.vision.image.RoiProject(need_meet_center_constraint, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Project ground-truth boxes onto the coordinate system defined by the image boundary.
:param need_meet_center_constraint whether the center constraint must be met, i.e. the center of a gt box must lie within the image boundary

class bigdl.transform.vision.image.RoiResize(normalized=True, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Resize the roi according to scale.
:param normalized whether the roi is normalized, i.e. in range [0, 1]

class bigdl.transform.vision.image.Saturation(delta_low, delta_high, bigdl_type='float')[source]

Bases: bigdl.transform.vision.image.FeatureTransformer

Adjust image saturation

class bigdl.transform.vision.image.SeqFileFolder(jvalue, bigdl_type, *args)[source]

Bases: bigdl.util.common.JavaValue

classmethod files_to_image_frame(url, sc, class_num, partition_num=-1, bigdl_type='float')[source]

Extract hadoop sequence files from an HDFS path as an ImageFrame.
:param url: sequence files folder path
:param sc: spark context
:param class_num: number of classes in the data
:param partition_num: partition number, default: Engine.nodeNumber() * Engine.coreNumber()

Module contents