Core Spark functionality. org.apache.spark.SparkContext serves as the main entry point to Spark, while org.apache.spark.rdd.RDD is the data type representing a distributed collection, and provides most parallel operations.
In addition, org.apache.spark.rdd.PairRDDFunctions contains operations available only on RDDs of key-value pairs, such as groupByKey and join; org.apache.spark.rdd.DoubleRDDFunctions contains operations available only on RDDs of Doubles; and org.apache.spark.rdd.SequenceFileRDDFunctions contains operations available on RDDs that can be saved as SequenceFiles. These operations are automatically available on any RDD of the right type (e.g. RDD[(Int, Int)]) through implicit conversions.
Java programmers should reference the org.apache.spark.api.java package for Spark programming APIs in Java.
Classes and methods marked with Experimental are user-facing features which have not been officially adopted by the Spark project. These are subject to change or removal in minor releases.
Classes and methods marked with Developer API are intended for advanced users who want to extend Spark through lower-level interfaces. These are subject to change or removal in minor releases.
Spark Java programming APIs.
Set of interfaces to represent functions in Spark's Java API. Users create implementations of these interfaces to pass functions to various Java API methods for Spark. Please visit Spark's Java programming guide for more details.
Aggregate the elements of each partition, and then the results for all the partitions, using given combine functions and a neutral "zero value". This function can return a different result type, U, than the type of this RDD, T. Thus, we need one operation for merging a T into a U and one operation for merging two U's, as in scala.IterableOnce. Both of these functions are allowed to modify and return their first argument instead of creating a new U to avoid memory allocation.
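The two-operation contract above can be illustrated with a plain-Java sketch that needs no Spark at all: seqOp merges a T into a U within each simulated partition, and combOp merges the per-partition U's. The class and method names here are illustrative, not part of the Spark API.

```java
import java.util.List;
import java.util.function.BiFunction;

// Local simulation of the aggregate contract: seqOp folds a T into a U
// within each "partition", combOp merges two U's across partitions.
public class AggregateSketch {
    static <T, U> U aggregate(List<List<T>> partitions, U zero,
                              BiFunction<U, T, U> seqOp,
                              BiFunction<U, U, U> combOp) {
        U result = zero;
        for (List<T> partition : partitions) {
            U acc = zero;                          // fresh zero per partition
            for (T t : partition) acc = seqOp.apply(acc, t);
            result = combOp.apply(result, acc);    // merge partition result
        }
        return result;
    }

    public static void main(String[] args) {
        // Sum of string lengths: T = String, U = Integer.
        int total = aggregate(
            List.of(List.of("a", "bb"), List.of("ccc")),
            0,
            (acc, s) -> acc + s.length(),
            Integer::sum);
        System.out.println(total); // 6
    }
}
```

Note how the zero value is applied once per partition and once more when combining, which is why it must be neutral for both operations.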
Persist this RDD with the default storage level (MEMORY_ONLY).
Return the Cartesian product of this RDD and another one, that is, the RDD of all pairs of elements (a, b) where a is in this and b is in other.
Mark this RDD for checkpointing. It will be saved to a file inside the checkpoint directory set with SparkContext.setCheckpointDir() and all references to its parent RDDs will be removed. This function must be called before any job has been executed on this RDD. It is strongly recommended that this RDD is persisted in memory, otherwise saving it on a file will require recomputation.
Return a new RDD that is reduced into numPartitions partitions.
Return an array that contains all of the elements in this RDD.
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
The asynchronous version of collect, which returns a future for retrieving an array containing all of the elements in this RDD.
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
Return an array that contains all of the elements in a specific partition of this RDD.
The org.apache.spark.SparkContext that this RDD was created on.
Return the number of elements in the RDD.
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
maximum time to wait for the job, in milliseconds
Approximate version of count() that returns a potentially incomplete result within a timeout, even if not all tasks have finished.
The confidence is the probability that the error bounds of the result will contain the true value. That is, if countApprox were called repeatedly with confidence 0.9, we would expect 90% of the results to contain the true count. The confidence must be in the range [0,1] or an exception will be thrown.
maximum time to wait for the job, in milliseconds
the desired statistical confidence in the result
a potentially incomplete result, with error bounds
Return approximate number of distinct elements in the RDD.
The algorithm used is based on streamlib's implementation of "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm", available here.
Relative accuracy. Smaller values create counters that require more space. It must be greater than 0.000017.
The asynchronous version of count, which returns a future for counting the number of elements in this RDD.
Return the count of each unique value in this RDD as a map of (value, count) pairs. The final combine step happens locally on the master, equivalent to running a single reduce task.
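The resulting (value, count) map can be mimicked locally with a HashMap, which is a useful way to reason about what countByValue returns. This is a plain-Java sketch of the semantics, not Spark code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Local sketch of countByValue semantics: each distinct value is mapped
// to the number of times it occurs in the data.
public class CountByValueSketch {
    static <T> Map<T, Long> countByValue(List<T> data) {
        Map<T, Long> counts = new HashMap<>();
        for (T t : data) counts.merge(t, 1L, Long::sum);
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Long> counts = countByValue(List.of("a", "b", "a"));
        System.out.println(counts.get("a")); // 2
    }
}
```

Because the final combine happens on the master, the full map must fit in driver memory, just like collect().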
Approximate version of countByValue().
maximum time to wait for the job, in milliseconds
a potentially incomplete result, with error bounds
Approximate version of countByValue().
The confidence is the probability that the error bounds of the result will contain the true value. That is, if countApprox were called repeatedly with confidence 0.9, we would expect 90% of the results to contain the true count. The confidence must be in the range [0,1] or an exception will be thrown.
maximum time to wait for the job, in milliseconds
the desired statistical confidence in the result
a potentially incomplete result, with error bounds
Return a new RDD containing the distinct elements in this RDD.
Return a new RDD containing only the elements that satisfy a predicate.
Return the first element in this RDD.
Return a new RDD by first applying a function to all elements of this RDD, and then flattening the results.
Aggregate the elements of each partition, and then the results for all the partitions, using a given associative function and a neutral "zero value". The function op(t1, t2) is allowed to modify t1 and return it as its result value to avoid object allocation; however, it should not modify t2.
This behaves somewhat differently from fold operations implemented for non-distributed collections in functional languages like Scala. This fold operation may be applied to partitions individually, and then fold those results into the final result, rather than apply the fold to each element sequentially in some defined ordering. For functions that are not commutative, the result may differ from that of a fold applied to a non-distributed collection.
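The partition-wise application described above can be sketched in plain Java (no Spark required); note that the zero value participates once per partition and once more in the cross-partition combine, which is why it must be neutral for op. Names here are illustrative.

```java
import java.util.List;
import java.util.function.BinaryOperator;

// Local sketch of the distributed fold: op is applied within each
// "partition" first, then across the per-partition results.
public class FoldSketch {
    static <T> T fold(List<List<T>> partitions, T zero, BinaryOperator<T> op) {
        T result = zero;
        for (List<T> partition : partitions) {
            T acc = zero;                       // fresh zero per partition
            for (T t : partition) acc = op.apply(acc, t);
            result = op.apply(result, acc);     // combine partition results
        }
        return result;
    }

    public static void main(String[] args) {
        // A commutative op like addition gives the same answer as a
        // sequential fold, regardless of how elements are partitioned.
        int sum = fold(List.of(List.of(1, 2), List.of(3, 4)), 0, Integer::sum);
        System.out.println(sum); // 10
    }
}
```

For a non-commutative op (string concatenation in a shuffled partition order, say), the per-partition grouping visible in this sketch is exactly why the distributed result can differ from a sequential fold.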
Applies a function f to all elements of this RDD.
The asynchronous version of the foreach action, which applies a function f to all the elements of this RDD.
Applies a function f to each partition of this RDD.
The asynchronous version of the foreachPartition action, which applies a function f to each partition of this RDD.
Gets the name of the file to which this RDD was checkpointed.
Return the number of partitions in this RDD.
Get the ResourceProfile specified with this RDD or None if it wasn't specified.
the user-specified ResourceProfile, or null if none was specified
Get the RDD's current storage level, or StorageLevel.NONE if none is set.
Return an RDD created by coalescing all elements within each partition into an array.
Return an RDD of grouped elements. Each group consists of a key and a sequence of elements mapping to that key.
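The key-to-group mapping can be sketched locally with plain Java collections; this illustrates the shape of the result, not the Spark implementation (which involves a shuffle).

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Local sketch of groupBy: each element is assigned a key by f, and all
// elements sharing a key end up in one group.
public class GroupBySketch {
    static <T, K> Map<K, List<T>> groupBy(List<T> data, Function<T, K> f) {
        Map<K, List<T>> groups = new HashMap<>();
        for (T t : data) {
            groups.computeIfAbsent(f.apply(t), k -> new ArrayList<>()).add(t);
        }
        return groups;
    }

    public static void main(String[] args) {
        // Group integers by parity.
        Map<Boolean, List<Integer>> parity =
            groupBy(List.of(1, 2, 3, 4), n -> n % 2 == 0);
        System.out.println(parity.get(true)); // [2, 4]
    }
}
```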
A unique ID for this RDD (within its SparkContext).
Return the intersection of this RDD and another one. The output will not contain any duplicate elements, even if the input RDDs did.
This method performs a shuffle internally.
Return whether this RDD has been checkpointed or not.
true if and only if the RDD contains no elements at all. Note that an RDD may be empty even when it has at least 1 partition.
Internal method to this RDD; will read from cache if applicable, or otherwise compute it. This should not be called by users directly, but is available for implementers of custom subclasses of RDD.
Creates tuples of the elements in this RDD by applying f.
Return a new RDD by applying a function to all elements of this RDD.
Return a new RDD by applying a function to each partition of this RDD.
Return a new RDD by applying a function to each partition of this RDD, while tracking the index of the original partition.
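What "tracking the index" means can be shown with a small plain-Java sketch: the user function sees each partition's index alongside its elements. The class and method names are illustrative, not Spark API.

```java
import java.util.ArrayList;
import java.util.List;

// Local sketch of mapPartitionsWithIndex: the function receives each
// partition's index together with that partition's elements.
public class MapPartitionsWithIndexSketch {
    static <T> List<String> tagWithPartition(List<List<T>> partitions) {
        List<String> out = new ArrayList<>();
        for (int index = 0; index < partitions.size(); index++) {
            for (T t : partitions.get(index)) {
                // The partition index is visible to the per-partition function.
                out.add(index + ":" + t);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(tagWithPartition(List.of(List.of("a"), List.of("b"))));
        // [0:a, 1:b]
    }
}
```

This is handy for debugging data skew, since it makes visible which partition each element lives in.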
Return a new RDD by applying a function to all elements of this RDD.
Returns the maximum element from this RDD as defined by the specified Comparator[T].
the comparator that defines ordering
the maximum of the RDD
Returns the minimum element from this RDD as defined by the specified Comparator[T].
the comparator that defines ordering
the minimum of the RDD
The partitioner of this RDD.
Set of partitions in this RDD.
Set this RDD's storage level to persist its values across operations after the first time it is computed. This can only be used to assign a new storage level if the RDD does not have a storage level set yet.
Return an RDD created by piping elements to a forked external process.
Randomly splits this RDD with the provided weights.
weights for splits, will be normalized if they don't sum to 1
random seed
split RDDs in an array
Randomly splits this RDD with the provided weights.
weights for splits, will be normalized if they don't sum to 1
split RDDs in an array
Reduces the elements of this RDD using the specified commutative and associative binary operator.
Return a new RDD that has exactly numPartitions partitions.
Can increase or decrease the level of parallelism in this RDD. Internally, this uses a shuffle to redistribute data.
If you are decreasing the number of partitions in this RDD, consider using coalesce, which can avoid performing a shuffle.
Return a sampled subset of this RDD, with a user-supplied seed.
can elements be sampled multiple times (replaced when sampled out)
expected size of the sample as a fraction of this RDD's size. Without replacement: probability that each element is chosen; fraction must be in [0, 1]. With replacement: expected number of times each element is chosen; fraction must be greater than or equal to 0.
seed for the random number generator
This is NOT guaranteed to provide exactly the fraction of the count of the given RDD.
Return a sampled subset of this RDD with a random seed.
can elements be sampled multiple times (replaced when sampled out)
expected size of the sample as a fraction of this RDD's size. Without replacement: probability that each element is chosen; fraction must be in [0, 1]. With replacement: expected number of times each element is chosen; fraction must be greater than or equal to 0.
This is NOT guaranteed to provide exactly the fraction of the count of the given RDD.
Save this RDD as a SequenceFile of serialized objects.
Save this RDD as a compressed text file, using string representations of elements.
Save this RDD as a text file, using string representations of elements.
Assign a name to this RDD.
Return this RDD sorted by the given key function.
Return an RDD with the elements from this that are not in other.
Uses this partitioner/partition size, because even if other is huge, the resulting RDD will be less than or equal to us.
Take the first num elements of the RDD. This currently scans the partitions *one by one*, so it will be slow if a lot of partitions are required. In that case, use collect() to get the whole RDD instead.
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
The asynchronous version of the take action, which returns a future for retrieving the first num elements of this RDD.
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
Returns the first k (smallest) elements from this RDD using the natural ordering for T while maintaining the order.
k, the number of top elements to return
an array of top elements
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
Returns the first k (smallest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.
k, the number of elements to return
the comparator that defines the order
an array of top elements
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
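A standard way to compute the k smallest elements in one pass, and a plausible mental model for why takeOrdered only needs the result to fit in driver memory, is a bounded max-heap. This plain-Java sketch is an illustration of the technique, not Spark's implementation.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.PriorityQueue;

// Local sketch of takeOrdered(k): keep only the k smallest elements seen
// so far in a bounded max-heap, then emit them in ascending order.
public class TakeOrderedSketch {
    static List<Integer> takeOrdered(List<Integer> data, int k) {
        // Max-heap of size <= k: the root is the largest of the k smallest.
        PriorityQueue<Integer> heap = new PriorityQueue<>(Collections.reverseOrder());
        for (int x : data) {
            heap.add(x);
            if (heap.size() > k) heap.poll();  // evict the current largest
        }
        List<Integer> result = new ArrayList<>(heap);
        Collections.sort(result);              // heap order is not sorted order
        return result;
    }

    public static void main(String[] args) {
        System.out.println(takeOrdered(List.of(5, 1, 4, 2, 3), 2)); // [1, 2]
    }
}
```

Memory stays O(k) regardless of how large the input is, which is why only the resulting array, not the data, must be small.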
A description of this RDD and its recursive dependencies for debugging.
Return an iterator that contains all of the elements in this RDD.
The iterator will consume as much memory as the largest partition in this RDD.
Returns the top k (largest) elements from this RDD using the natural ordering for T and maintains the order.
k, the number of top elements to return
an array of top elements
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
Returns the top k (largest) elements from this RDD as defined by the specified Comparator[T] and maintains the order.
k, the number of top elements to return
the comparator that defines the order
an array of top elements
this method should only be used if the resulting array is expected to be small, asall the data is loaded into the driver's memory.
org.apache.spark.api.java.JavaRDDLike.treeAggregate with a parameter to do the final aggregation on the executor.
org.apache.spark.api.java.JavaRDDLike.treeAggregate with suggested depth 2.
Aggregates the elements of this RDD in a multi-level tree pattern.
suggested depth of the tree
org.apache.spark.api.java.JavaRDDLike.treeReduce with suggested depth 2.
Reduces the elements of this RDD in a multi-level tree pattern.
suggested depth of the tree
Return the union of this RDD and another one. Any identical elements will appear multiple times (use .distinct() to eliminate them).
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk.
Whether to block until all blocks are deleted.
Mark the RDD as non-persistent, and remove all blocks for it from memory and disk. This method blocks until all blocks are deleted.
Specify a ResourceProfile to use when calculating this RDD. This is only supported on certain cluster managers and currently requires dynamic allocation to be enabled. It will result in new executors with the resources specified being acquired to calculate the RDD.
Zips this RDD with another one, returning key-value pairs with the first element in each RDD, second element in each RDD, etc. Assumes that the two RDDs have the *same number of partitions* and the *same number of elements in each partition* (e.g. one was made through a map on the other).
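The positional pairing, and the same-length requirement it implies, can be shown with a plain-Java sketch over lists (standing in for the equal-length partitions). Names here are illustrative, not Spark API.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Local sketch of zip: pairs the i-th element of one side with the i-th
// element of the other; both sides must have the same length, mirroring
// the same-number-of-elements-per-partition requirement.
public class ZipSketch {
    static <A, B> List<Map.Entry<A, B>> zip(List<A> left, List<B> right) {
        if (left.size() != right.size()) {
            throw new IllegalArgumentException("both sides must have the same length");
        }
        List<Map.Entry<A, B>> out = new ArrayList<>();
        for (int i = 0; i < left.size(); i++) {
            out.add(new SimpleEntry<>(left.get(i), right.get(i)));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(zip(List.of(1, 2), List.of("a", "b"))); // [1=a, 2=b]
    }
}
```

An RDD made through a map on the other trivially satisfies the length requirement, which is why the documentation gives it as the canonical use case.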
Zip this RDD's partitions with one (or more) RDD(s) and return a new RDD by applying a function to the zipped partitions. Assumes that all the RDDs have the *same number of partitions*, but does *not* require them to have the same number of elements in each partition.
Zips this RDD with its element indices. The ordering is first based on the partition index and then the ordering of items within each partition. So the first item in the first partition gets index 0, and the last item in the last partition receives the largest index. This is similar to Scala's zipWithIndex but it uses Long instead of Int as the index type. This method needs to trigger a spark job when this RDD contains more than one partition.
Zips this RDD with generated unique Long ids. Items in the kth partition will get ids k, n+k, 2*n+k, ..., where n is the number of partitions. So there may exist gaps, but this method won't trigger a spark job, which is different from org.apache.spark.rdd.RDD#zipWithIndex.
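The id scheme described above is a simple arithmetic formula: the i-th item of the k-th partition (out of n partitions) receives id k + i*n, so no cross-partition counting, and hence no job, is needed. A small sketch of that formula:

```java
// Local sketch of the zipWithUniqueId id scheme: the i-th item of the
// k-th partition (out of n partitions) receives id k + i*n, producing
// the sequence k, n+k, 2*n+k, ... for that partition.
public class ZipWithUniqueIdSketch {
    static long uniqueId(int partition, long positionInPartition, int numPartitions) {
        return partition + positionInPartition * (long) numPartitions;
    }

    public static void main(String[] args) {
        int n = 3; // number of partitions
        // Ids for partition 1: 1, 4, 7, ... Gaps can occur when partitions
        // have unequal sizes, but each id is still unique.
        System.out.println(uniqueId(1, 0, n)); // 1
        System.out.println(uniqueId(1, 1, n)); // 4
        System.out.println(uniqueId(1, 2, n)); // 7
    }
}
```

Each partition can compute its ids independently from its own index, which is exactly why this method avoids the Spark job that zipWithIndex requires.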