Bucketizer#
- class pyspark.ml.feature.Bucketizer(*, splits=None, inputCol=None, outputCol=None, handleInvalid='error', splitsArray=None, inputCols=None, outputCols=None)[source]#
Maps a column of continuous features to a column of feature buckets. Since 3.0.0,
Bucketizer can map multiple columns at once by setting the inputCols parameter. Note that when both the inputCol and inputCols parameters are set, an Exception will be thrown. The splits parameter is only used for single column usage, and splitsArray is for multiple columns.
New in version 1.4.0.
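When both the single-column and multi-column params are supplied, the conflict surfaces once the transformer runs. A minimal sketch of that failure mode (not from the official docs; it assumes an active SparkSession bound to spark and an illustrative DataFrame df with columns "x" and "y"):

from pyspark.ml.feature import Bucketizer

conflicting = Bucketizer(
    splits=[-float("inf"), 0.0, float("inf")],
    inputCol="x",           # single-column param...
    inputCols=["x", "y"],   # ...conflicts with the multi-column param
    outputCol="bx",
)
try:
    conflicting.transform(df)  # Spark rejects the conflicting params when the transform runs
except Exception as err:
    print("conflicting params rejected:", type(err).__name__)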
Examples
>>> values = [(0.1, 0.0), (0.4, 1.0), (1.2, 1.3), (1.5, float("nan")),
...     (float("nan"), 1.0), (float("nan"), 0.0)]
>>> df = spark.createDataFrame(values, ["values1", "values2"])
>>> bucketizer = Bucketizer()
>>> bucketizer.setSplits([-float("inf"), 0.5, 1.4, float("inf")])
Bucketizer...
>>> bucketizer.setInputCol("values1")
Bucketizer...
>>> bucketizer.setOutputCol("buckets")
Bucketizer...
>>> bucketed = bucketizer.setHandleInvalid("keep").transform(df).collect()
>>> bucketed = bucketizer.setHandleInvalid("keep").transform(df.select("values1"))
>>> bucketed.show(truncate=False)
+-------+-------+
|values1|buckets|
+-------+-------+
|0.1    |0.0    |
|0.4    |0.0    |
|1.2    |1.0    |
|1.5    |2.0    |
|NaN    |3.0    |
|NaN    |3.0    |
+-------+-------+
...
>>> bucketizer.setParams(outputCol="b").transform(df).head().b
0.0
>>> bucketizerPath = temp_path + "/bucketizer"
>>> bucketizer.save(bucketizerPath)
>>> loadedBucketizer = Bucketizer.load(bucketizerPath)
>>> loadedBucketizer.getSplits() == bucketizer.getSplits()
True
>>> loadedBucketizer.transform(df).take(1) == bucketizer.transform(df).take(1)
True
>>> bucketed = bucketizer.setHandleInvalid("skip").transform(df).collect()
>>> len(bucketed)
4
>>> bucketizer2 = Bucketizer(splitsArray=
...     [[-float("inf"), 0.5, 1.4, float("inf")], [-float("inf"), 0.5, float("inf")]],
...     inputCols=["values1", "values2"], outputCols=["buckets1", "buckets2"])
>>> bucketed2 = bucketizer2.setHandleInvalid("keep").transform(df)
>>> bucketed2.show(truncate=False)
+-------+-------+--------+--------+
|values1|values2|buckets1|buckets2|
+-------+-------+--------+--------+
|0.1    |0.0    |0.0     |0.0     |
|0.4    |1.0    |0.0     |1.0     |
|1.2    |1.3    |1.0     |1.0     |
|1.5    |NaN    |2.0     |2.0     |
|NaN    |1.0    |3.0     |1.0     |
|NaN    |0.0    |3.0     |0.0     |
+-------+-------+--------+--------+
...
Methods
clear(param): Clears a param from the param map if it has been explicitly set.
copy([extra]): Creates a copy of this instance with the same uid and some extra params.
explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
explainParams(): Returns the documentation of all params with their optional default values and user-supplied values.
extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
getHandleInvalid(): Gets the value of handleInvalid or its default value.
getInputCol(): Gets the value of inputCol or its default value.
getInputCols(): Gets the value of inputCols or its default value.
getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
getOutputCol(): Gets the value of outputCol or its default value.
getOutputCols(): Gets the value of outputCols or its default value.
getParam(paramName): Gets a param by its name.
getSplits(): Gets the value of splits or its default value.
getSplitsArray(): Gets the array of split points or its default value.
hasDefault(param): Checks whether a param has a default value.
hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
isDefined(param): Checks whether a param is explicitly set by user or has a default value.
isSet(param): Checks whether a param is explicitly set by user.
load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
read(): Returns an MLReader instance for this class.
save(path): Save this ML instance to the given path, a shortcut of 'write().save(path)'.
set(param, value): Sets a parameter in the embedded param map.
setHandleInvalid(value): Sets the value of handleInvalid.
setInputCol(value): Sets the value of inputCol.
setInputCols(value): Sets the value of inputCols.
setOutputCol(value): Sets the value of outputCol.
setOutputCols(value): Sets the value of outputCols.
setParams(self, \*[, splits, inputCol, ...]): Sets params for this Bucketizer.
setSplits(value): Sets the value of splits.
setSplitsArray(value): Sets the value of splitsArray.
transform(dataset[, params]): Transforms the input dataset with optional parameters.
write(): Returns an MLWriter instance for this ML instance.
Attributes
params: Returns all params ordered by name.
Methods Documentation
- clear(param)#
Clears a param from the param map if it has been explicitly set.
- copy(extra=None)#
Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.
- Parameters
- extra : dict, optional
Extra parameters to copy to the new instance
- Returns
- JavaParams
Copy of this instance
- explainParam(param)#
Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams()#
Returns the documentation of all params with their optional default values and user-supplied values.
- extractParamMap(extra=None)#
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- Parameters
- extra : dict, optional
extra param values
- Returns
- dict
merged param map
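As a quick sketch of the merge ordering (reusing the bucketizer from the Examples above; the override value "skip" is illustrative):

pm = bucketizer.extractParamMap({bucketizer.handleInvalid: "skip"})
pm[bucketizer.handleInvalid]  # 'skip': the extra value outranks user-set and default values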
- getHandleInvalid()#
Gets the value of handleInvalid or its default value.
- getInputCol()#
Gets the value of inputCol or its default value.
- getInputCols()#
Gets the value of inputCols or its default value.
- getOrDefault(param)#
Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.
- getOutputCol()#
Gets the value of outputCol or its default value.
- getOutputCols()#
Gets the value of outputCols or its default value.
- getParam(paramName)#
Gets a param by its name.
- hasDefault(param)#
Checks whether a param has a default value.
- hasParam(paramName)#
Tests whether this instance contains a param with a given (string) name.
- isDefined(param)#
Checks whether a param is explicitly set by user or has a default value.
- isSet(param)#
Checks whether a param is explicitly set by user.
- classmethod load(path)#
Reads an ML instance from the input path, a shortcut of read().load(path).
- classmethod read()#
Returns an MLReader instance for this class.
- save(path)#
Save this ML instance to the given path, a shortcut of 'write().save(path)'.
- set(param, value)#
Sets a parameter in the embedded param map.
- setHandleInvalid(value)[source]#
Sets the value of handleInvalid.
- setInputCol(value)[source]#
Sets the value of inputCol.
- setInputCols(value)[source]#
Sets the value of inputCols.
New in version 3.0.0.
- setOutputCol(value)[source]#
Sets the value of outputCol.
- setOutputCols(value)[source]#
Sets the value of outputCols.
New in version 3.0.0.
- setParams(self, \*, splits=None, inputCol=None, outputCol=None, handleInvalid="error", splitsArray=None, inputCols=None, outputCols=None)[source]#
Sets params for this Bucketizer.
New in version 1.4.0.
- setSplits(value)[source]#
Sets the value of splits.
New in version 1.4.0.
- setSplitsArray(value)[source]#
Sets the value of splitsArray.
New in version 3.0.0.
- transform(dataset, params=None)#
Transforms the input dataset with optional parameters.
New in version 1.3.0.
- Parameters
- dataset : pyspark.sql.DataFrame
input dataset
- params : dict, optional
an optional param map that overrides embedded params.
- Returns
- pyspark.sql.DataFrame
transformed dataset
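For example, a hedged sketch reusing bucketizer and df from the Examples above: a param map passed here overrides the embedded params for this one call only, leaving the transformer itself unchanged.

skipped = bucketizer.transform(df, {bucketizer.handleInvalid: "skip"})
skipped.count()  # the NaN rows in the input column are dropped for this call only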
- write()#
Returns an MLWriter instance for this ML instance.
Attributes Documentation
- handleInvalid = Param(parent='undefined', name='handleInvalid', doc="how to handle invalid entries containing NaN values. Values outside the splits will always be treated as errors. Options are 'skip' (filter out rows with invalid values), 'error' (throw an error), or 'keep' (keep invalid values in a special additional bucket). Note that in the multiple column case, the invalid handling is applied to all columns. That said for 'error' it will throw an error if any invalids are found in any column, for 'skip' it will skip rows with any invalids in any columns, etc.")#
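A minimal sketch of the three options (assumes an active SparkSession bound to spark; the column name "v" is illustrative). With splits [-inf, 1.0, inf] there are two regular buckets, so 'keep' routes NaN to the extra bucket 2.0:

from pyspark.ml.feature import Bucketizer

df = spark.createDataFrame([(0.2,), (1.6,), (float("nan"),)], ["v"])
b = Bucketizer(splits=[-float("inf"), 1.0, float("inf")],
               inputCol="v", outputCol="bucket")
b.setHandleInvalid("keep").transform(df).count()   # 3 rows; NaN lands in the extra bucket 2.0
b.setHandleInvalid("skip").transform(df).count()   # 2 rows; the NaN row is filtered out
# b.setHandleInvalid("error").transform(df).collect() would raise on the NaN row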
- inputCol = Param(parent='undefined', name='inputCol', doc='input column name.')#
- inputCols = Param(parent='undefined', name='inputCols', doc='input column names.')#
- outputCol = Param(parent='undefined', name='outputCol', doc='output column name.')#
- outputCols = Param(parent='undefined', name='outputCols', doc='output column names.')#
- params#
Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.
- splits = Param(parent='undefined', name='splits', doc='Split points for mapping continuous features into buckets. With n+1 splits, there are n buckets. A bucket defined by splits x,y holds values in the range [x,y) except the last bucket, which also includes y. The splits should be of length >= 3 and strictly increasing. Values at -inf, inf must be explicitly provided to cover all Double values; otherwise, values outside the splits specified will be treated as errors.')#
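The rule above can be restated in plain Python. This is an illustrative sketch of the documented semantics, not Spark's implementation:

import bisect

def bucket_index(value, splits):
    # Values in [splits[i], splits[i+1]) map to bucket i...
    i = bisect.bisect_right(splits, value) - 1
    # ...except that the last bucket also includes its upper bound y.
    if value == splits[-1]:
        i = len(splits) - 2
    return i

splits = [float("-inf"), 0.5, 1.4, float("inf")]
[bucket_index(v, splits) for v in [0.1, 0.4, 1.2, 1.5]]  # [0, 0, 1, 2], matching the Examples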
- splitsArray = Param(parent='undefined', name='splitsArray', doc='The array of split points for mapping continuous features into buckets for multiple columns. For each input column, with n+1 splits, there are n buckets. A bucket defined by splits x,y holds values in the range [x,y) except the last bucket, which also includes y. The splits should be of length >= 3 and strictly increasing. Values at -inf, inf must be explicitly provided to cover all Double values; otherwise, values outside the splits specified will be treated as errors.')#
- uid#
A unique id for the object.