pyspark.RDD.map
- RDD.map(f, preservesPartitioning=False)
Return a new RDD by applying a function to each element of this RDD.
New in version 0.7.0.
- Parameters
- f : function
a function to run on each element of the RDD
- preservesPartitioning : bool, optional, default False
indicates whether the input function preserves the partitioner, which should be False unless this is a pair RDD and the input function does not modify the keys
- Returns
- RDD
a new RDD created by applying the function to each element of this RDD
Examples
>>> rdd = sc.parallelize(["b", "a", "c"])
>>> sorted(rdd.map(lambda x: (x, 1)).collect())
[('a', 1), ('b', 1), ('c', 1)]