Use the Cloud Storage connector with Apache Spark
Objectives
Write a simple wordcount Spark job in Java, Scala, or Python, then run the job on a Dataproc cluster.
All Dataproc cluster image versions have the Spark components needed for this tutorial already installed.
Costs
In this document, you use the following billable components of Google Cloud:
- Compute Engine
- Dataproc
- Cloud Storage
To generate a cost estimate based on your projected usage, use the pricing calculator.
Before you begin
Complete the following steps to prepare to run the code in this tutorial.
Set up your project. If necessary, set up a project with the Dataproc, Compute Engine, and Cloud Storage APIs enabled and the Google Cloud CLI installed on your local machine.
- Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
In the Google Cloud console, on the project selector page, select or create a Google Cloud project.
Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.
Roles required to select or create a project
- Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
- Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
Verify that billing is enabled for your Google Cloud project.
Enable the Dataproc, Compute Engine, and Cloud Storage APIs.
Roles required to enable APIs
To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.
Create a service account:
- Ensure that you have the Create Service Accounts IAM role (roles/iam.serviceAccountCreator) and the Project IAM Admin role (roles/resourcemanager.projectIamAdmin). Learn how to grant roles.
- In the Google Cloud console, go to the Create service account page.
Go to Create service account
- Select your project.
- In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name.
- In the Service account description field, enter a description. For example, Service account for quickstart.
- Click Create and continue.
Grant the Project > Owner role to the service account.
To grant the role, find the Select a role list, then select Project > Owner.
Note: The Role field affects which resources the service account can access in your project. You can revoke these roles or grant additional roles later. In production environments, do not grant the Owner, Editor, or Viewer roles. Instead, grant a predefined role or custom role that meets your needs.
- Click Continue.
- Click Done to finish creating the service account.
Do not close your browser window. You will use it in the next step.
Create a service account key:
- In the Google Cloud console, click the email address for the service account that you created.
- Click Keys.
- Click Add key, and then click Create new key.
- Click Create. A JSON key file is downloaded to your computer.
- Click Close.
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your credentials. This variable applies only to your current shell session, so if you open a new session, set the variable again.
Example: Linux or macOS
export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"
Replace KEY_PATH with the path of the JSON file that contains your credentials.
For example:
export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"
Example: Windows
For PowerShell:
$env:GOOGLE_APPLICATION_CREDENTIALS="
KEY_PATH"Replace
KEY_PATHwith the path of the JSON file that contains your credentials.For example:
$env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\service-account-file.json"
For command prompt:
setGOOGLE_APPLICATION_CREDENTIALS=
KEY_PATHReplace
KEY_PATHwith the path of the JSON file that contains your credentials.Install the Google Cloud CLI.
If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.
To initialize the gcloud CLI, run the following command:
gcloud init
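If you don't want to repeat the project and region flags in later commands, you can optionally set gcloud defaults. This is an optional sketch, not required by the tutorial; the region value is only an example:
# Set a default project and Dataproc region for the current gcloud configuration.
gcloud config set project project-id
gcloud config set dataproc/region us-central1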
Create a Cloud Storage bucket. You need a Cloud Storage bucket to hold tutorial data. If you do not have one ready to use, create a new bucket in your project.
- In the Google Cloud console, go to the Cloud Storage Buckets page.
- Click Create.
- On the Create a bucket page, enter your bucket information. To go to the next step, click Continue.
- In the Get started section, do the following:
- Enter a globally unique name that meets the bucket naming requirements.
- To add a bucket label, expand the Labels section, click Add label, and specify a key and a value for your label.
- In the Choose where to store your data section, do the following:
- Select a Location type.
- Choose a location where your bucket's data is permanently stored from the Location type drop-down menu.
- If you select the dual-region location type, you can also choose to enable turbo replication by using the relevant checkbox.
- To set up cross-bucket replication, select Add cross-bucket replication via Storage Transfer Service and follow these steps:
Set up cross-bucket replication
- In the Bucket menu, select a bucket.
In the Replication settings section, click Configure to configure settings for the replication job.
The Configure cross-bucket replication pane appears.
- To filter objects to replicate by object name prefix, enter a prefix that you want to include or exclude objects from, then click Add a prefix.
- To set a storage class for the replicated objects, select a storage class from the Storage class menu. If you skip this step, the replicated objects will use the destination bucket's storage class by default.
- Click Done.
- In the Choose how to store your data section, do the following:
- Select a default storage class for the bucket or Autoclass for automatic storage class management of your bucket's data.
- To enable hierarchical namespace, in the Optimize storage for data-intensive workloads section, select Enable hierarchical namespace on this bucket. Note: You cannot enable hierarchical namespace in existing buckets.
- In the Choose how to control access to objects section, select whether or not your bucket enforces public access prevention, and select an access control method for your bucket's objects. Note: You cannot change the Prevent public access setting if this setting is enforced in an organization policy.
- In the Choose how to protect object data section, do the following:
- Select any of the options under Data protection that you want to set for your bucket.
- To enable soft delete, click the Soft delete policy (For data recovery) checkbox, and specify the number of days you want to retain objects after deletion.
- To set Object Versioning, click the Object versioning (For version control) checkbox, and specify the maximum number of versions per object and the number of days after which the noncurrent versions expire.
- To enable the retention policy on objects and buckets, click the Retention (For compliance) checkbox, and then do the following:
- To enable Object Retention Lock, click the Enable object retention checkbox.
- To enable Bucket Lock, click the Set bucket retention policy checkbox, and choose a unit of time and a length of time for your retention period.
- To choose how your object data will be encrypted, expand the Data encryption section, and select a Data encryption method.
- Click Create.
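If you prefer the command line to the console steps above, you can create the bucket with the gcloud CLI instead. A minimal sketch, assuming the bucket name and location you plan to use for this tutorial:
# Create the tutorial bucket in the same region where you will run the Dataproc cluster.
gcloud storage buckets create gs://bucket-name --location=us-central1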
Set local environment variables. Set environment variables on your local machine. Set your Google Cloud project ID and the name of the Cloud Storage bucket you will use for this tutorial. Also provide the name and region of an existing or new Dataproc cluster. You can create a cluster to use in this tutorial in the next step.
PROJECT=project-id
BUCKET_NAME=bucket-name
CLUSTER=cluster-name
REGION=cluster-region  # Example: "us-central1"
Create a Dataproc cluster. Run the following command to create a single-node Dataproc cluster in the specified Compute Engine region.
gcloud dataproc clusters create ${CLUSTER} \
    --project=${PROJECT} \
    --region=${REGION} \
    --single-node
The above command installs the default cluster image version. You can use the --image-version flag to select an image version for your cluster. Each image version installs specific versions of Spark and Scala library components. If you prepare the Spark wordcount job in Java or Scala, you will reference the Spark and Scala versions installed on your cluster when you prepare the job package.
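If you need to confirm which image version your cluster runs before filling in the Spark and Scala versions later in this tutorial, one option is to describe the cluster with the gcloud CLI. An optional sketch; the field path assumes the standard Dataproc cluster resource layout:
# Print only the cluster's image version, for example "2.1-debian11".
gcloud dataproc clusters describe ${CLUSTER} \
    --region=${REGION} \
    --format="value(config.softwareConfig.imageVersion)"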
Copy public data to your Cloud Storage bucket. Copy a public data Shakespeare text snippet into the input folder of your Cloud Storage bucket:
gcloud storage cp gs://pub/shakespeare/rose.txt \
    gs://${BUCKET_NAME}/input/rose.txt
Set up a Java (Apache Maven), Scala (SBT), or Python development environment, or use Cloud Shell. Cloud Shell includes tools used in this tutorial, including Apache Maven, Python, and the Google Cloud CLI.
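Before preparing the job, you can optionally confirm that the snippet landed in your bucket by listing the input folder:
# The copied file should appear as gs://BUCKET_NAME/input/rose.txt.
gcloud storage ls gs://${BUCKET_NAME}/input/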
Prepare the Spark wordcount job
Select a tab below to follow the steps to prepare a job package or file to submit to your cluster. You can prepare one of the following job types:
- Spark job in Java using Apache Maven to build a JAR package
- Spark job in Scala using SBT to build a JAR package
- Spark job in Python (PySpark)
Java
- Copy the pom.xml file to your local machine. The following pom.xml file specifies Scala and Spark library dependencies, which are given a provided scope to indicate that the Dataproc cluster will provide these libraries at runtime. The pom.xml file does not specify a Cloud Storage dependency because the connector implements the standard HDFS interface. When a Spark job accesses Cloud Storage cluster files (files with URIs that start with gs://), the system automatically uses the Cloud Storage connector to access the files in Cloud Storage. Check your cluster image version. Replace the version placeholders in the file with the Spark and Scala library versions used by your cluster's image version. Note that the spark-core_ artifact number is the Scala major.minor version number.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>dataproc.codelab</groupId>
  <artifactId>word-count</artifactId>
  <version>1.0</version>

  <properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
  </properties>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>Scala version, for example, 2.11.8</version>
      <scope>provided</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_Scala major.minor version, for example, 2.11</artifactId>
      <version>Spark version, for example, 2.3.1</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>

- Copy the WordCount.java code listed below to your local machine.
- Create a set of directories with the path src/main/java/dataproc/codelab:
mkdir -p src/main/java/dataproc/codelab
- Copy WordCount.java to your local machine into src/main/java/dataproc/codelab:
cp WordCount.java src/main/java/dataproc/codelab

WordCount.java is a Spark job in Java that reads text files from Cloud Storage, performs a word count, then writes the text file results to Cloud Storage.

package dataproc.codelab;

import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class WordCount {
  public static void main(String[] args) {
    if (args.length != 2) {
      throw new IllegalArgumentException("Exactly 2 arguments are required: <inputUri> <outputUri>");
    }
    String inputPath = args[0];
    String outputPath = args[1];

    JavaSparkContext sparkContext = new JavaSparkContext(new SparkConf().setAppName("Word Count"));
    JavaRDD<String> lines = sparkContext.textFile(inputPath);
    JavaRDD<String> words = lines.flatMap(
        (String line) -> Arrays.asList(line.split(" ")).iterator());
    JavaPairRDD<String, Integer> wordCounts = words.mapToPair(
        (String word) -> new Tuple2<>(word, 1)).reduceByKey(
        (Integer count1, Integer count2) -> count1 + count2);
    wordCounts.saveAsTextFile(outputPath);
  }
}
- Build the package.
mvn clean package
If the build is successful, a target/word-count-1.0.jar is created.
- Stage the package to Cloud Storage.
gcloud storage cp target/word-count-1.0.jar \
    gs://${BUCKET_NAME}/java/word-count-1.0.jar
Scala
- Copy the build.sbt file to your local machine. The following build.sbt file specifies Scala and Spark library dependencies, which are given a provided scope to indicate that the Dataproc cluster will provide these libraries at runtime. The build.sbt file does not specify a Cloud Storage dependency because the connector implements the standard HDFS interface. When a Spark job accesses Cloud Storage cluster files (files with URIs that start with gs://), the system automatically uses the Cloud Storage connector to access the files in Cloud Storage. Check your cluster image version. Replace the version placeholders in the file with the Spark and Scala library versions used by your cluster's image version.

scalaVersion := "Scala version, for example, 2.11.8"

name := "word-count"
organization := "dataproc.codelab"
version := "1.0"

libraryDependencies ++= Seq(
  "org.scala-lang" % "scala-library" % scalaVersion.value % "provided",
  "org.apache.spark" %% "spark-core" % "Spark version, for example, 2.3.1" % "provided"
)

- Copy word-count.scala to your local machine. This is a Spark job in Scala that reads text files from Cloud Storage, performs a word count, then writes the text file results to Cloud Storage.

package dataproc.codelab

import org.apache.spark.SparkContext
import org.apache.spark.SparkConf

object WordCount {
  def main(args: Array[String]) {
    if (args.length != 2) {
      throw new IllegalArgumentException(
          "Exactly 2 arguments are required: <inputPath> <outputPath>")
    }

    val inputPath = args(0)
    val outputPath = args(1)

    val sc = new SparkContext(new SparkConf().setAppName("Word Count"))
    val lines = sc.textFile(inputPath)
    val words = lines.flatMap(line => line.split(" "))
    val wordCounts = words.map(word => (word, 1)).reduceByKey(_ + _)
    wordCounts.saveAsTextFile(outputPath)
  }
}
- Build the package.
sbt clean package
If the build is successful, a target/scala-2.11/word-count_2.11-1.0.jar is created.
- Stage the package to Cloud Storage.
gcloud storage cp target/scala-2.11/word-count_2.11-1.0.jar \
    gs://${BUCKET_NAME}/scala/word-count_2.11-1.0.jar
Python
- Copy word-count.py to your local machine. This is a Spark job in Python using PySpark that reads text files from Cloud Storage, performs a word count, then writes the text file results to Cloud Storage.

#!/usr/bin/env python

import pyspark
import sys

if len(sys.argv) != 3:
  raise Exception("Exactly 2 arguments are required: <inputUri> <outputUri>")

inputUri = sys.argv[1]
outputUri = sys.argv[2]

sc = pyspark.SparkContext()
lines = sc.textFile(inputUri)
words = lines.flatMap(lambda line: line.split())
wordCounts = words.map(lambda word: (word, 1)).reduceByKey(lambda count1, count2: count1 + count2)
wordCounts.saveAsTextFile(outputUri)
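If you have a local Spark installation available, you can optionally smoke-test the script against local files before submitting it to the cluster; reading gs:// paths locally would additionally require the Cloud Storage connector on the classpath. An optional sketch with hypothetical local file paths:
# Run the job locally; the output path must not already exist.
spark-submit --master "local[*]" word-count.py \
    file:///tmp/rose.txt file:///tmp/wordcount-output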
Submit the job
Run the following gcloud command to submit the wordcount job to your Dataproc cluster.
Java
gcloud dataproc jobs submit spark \
    --cluster=${CLUSTER} \
    --class=dataproc.codelab.WordCount \
    --jars=gs://${BUCKET_NAME}/java/word-count-1.0.jar \
    --region=${REGION} \
    -- gs://${BUCKET_NAME}/input/ gs://${BUCKET_NAME}/output/
Scala
gcloud dataproc jobs submit spark \
    --cluster=${CLUSTER} \
    --class=dataproc.codelab.WordCount \
    --jars=gs://${BUCKET_NAME}/scala/word-count_2.11-1.0.jar \
    --region=${REGION} \
    -- gs://${BUCKET_NAME}/input/ gs://${BUCKET_NAME}/output/
Python
gcloud dataproc jobs submit pyspark word-count.py \
    --cluster=${CLUSTER} \
    --region=${REGION} \
    -- gs://${BUCKET_NAME}/input/ gs://${BUCKET_NAME}/output/
View the output
After the job finishes, run the following gcloud CLI command to view the wordcount output.
gcloud storage cat gs://${BUCKET_NAME}/output/*
The wordcount output should be similar to the following:
(a,2)
(call,1)
(What's,1)
(sweet.,1)
(we,1)
(as,1)
(name?,1)
(any,1)
(other,1)
(rose,1)
(smell,1)
(name,1)
(would,1)
(in,1)
(which,1)
(That,1)
(By,1)
Clean up
After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.
Delete the project
The easiest way to eliminate billing is to delete the project that you created for the tutorial.
To delete the project:
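One way is from the command line with the gcloud CLI, which deletes the project and every resource in it and prompts for confirmation. A minimal sketch using the PROJECT variable set earlier:
# Permanently deletes the project after a confirmation prompt.
gcloud projects delete ${PROJECT}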
Delete the Dataproc cluster
Instead of deleting your project, you might want to only delete your cluster within the project.
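For example, you can delete the cluster with the gcloud CLI, using the variables set earlier in this tutorial:
# Deletes only the Dataproc cluster; other project resources are unaffected.
gcloud dataproc clusters delete ${CLUSTER} --region=${REGION}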
Delete the Cloud Storage bucket
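You can remove the tutorial data and the bucket itself with the gcloud CLI. A short sketch using the BUCKET_NAME variable; the --recursive flag deletes the bucket's objects along with the bucket:
gcloud storage rm --recursive gs://${BUCKET_NAME}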
What's next