Use the Cloud Storage connector with Apache Spark

This tutorial shows you how to run example code that uses the Cloud Storage connector with Apache Spark.

Objectives

Write a simple wordcount Spark job in Java, Scala, or Python, then run the job on a Dataproc cluster.

All Dataproc cluster image versions have the Spark components needed for this tutorial already installed.

Costs

In this document, you use the following billable components of Google Cloud: Dataproc, Compute Engine, and Cloud Storage.

To generate a cost estimate based on your projected usage, use the pricing calculator.

New Google Cloud users might be eligible for a free trial.

Before you begin

Complete the steps below to prepare to run the code in this tutorial.

  1. Set up your project. If necessary, set up a project with the Dataproc, Compute Engine, and Cloud Storage APIs enabled and the Google Cloud CLI installed on your local machine.

    1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
    2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

      Roles required to select or create a project

      • Select a project: Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
      • Create a project: To create a project, you need the Project Creator role (roles/resourcemanager.projectCreator), which contains the resourcemanager.projects.create permission. Learn how to grant roles.
      Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

      Go to project selector

    3. Verify that billing is enabled for your Google Cloud project.

    4. Enable the Dataproc, Compute Engine, and Cloud Storage APIs.

      Roles required to enable APIs

      To enable APIs, you need the Service Usage Admin IAM role (roles/serviceusage.serviceUsageAdmin), which contains the serviceusage.services.enable permission. Learn how to grant roles.

      Enable the APIs

    5. Create a service account:

      1. Ensure that you have the Create Service Accounts IAM role (roles/iam.serviceAccountCreator) and the Project IAM Admin role (roles/resourcemanager.projectIamAdmin). Learn how to grant roles.
      2. In the Google Cloud console, go to the Create service account page.

        Go to Create service account
      3. Select your project.
      4. In the Service account name field, enter a name. The Google Cloud console fills in the Service account ID field based on this name.

        In the Service account description field, enter a description. For example, Service account for quickstart.

      5. Click Create and continue.
      6. Grant the Project > Owner role to the service account.

        To grant the role, find the Select a role list, then select Project > Owner.

        Note: The Role field affects which resources the service account can access in your project. You can revoke these roles or grant additional roles later. In production environments, do not grant the Owner, Editor, or Viewer roles. Instead, grant a predefined role or custom role that meets your needs.
      7. Click Continue.
      8. Click Done to finish creating the service account.

        Do not close your browser window. You will use it in the next step.

    6. Create a service account key:

      1. In the Google Cloud console, click the email address for the service account that you created.
      2. Click Keys.
      3. Click Add key, and then click Create new key.
      4. Click Create. A JSON key file is downloaded to your computer.
      5. Click Close.
    7. Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the path of the JSON file that contains your credentials. This variable applies only to your current shell session, so if you open a new session, set the variable again.

      Example: Linux or macOS

      export GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

      Replace KEY_PATH with the path of the JSON file that contains your credentials.

      For example:

      export GOOGLE_APPLICATION_CREDENTIALS="/home/user/Downloads/service-account-file.json"

      Example: Windows

      For PowerShell:

      $env:GOOGLE_APPLICATION_CREDENTIALS="KEY_PATH"

      Replace KEY_PATH with the path of the JSON file that contains your credentials.

      For example:

      $env:GOOGLE_APPLICATION_CREDENTIALS="C:\Users\username\Downloads\service-account-file.json"

      For command prompt:

      set GOOGLE_APPLICATION_CREDENTIALS=KEY_PATH

      Replace KEY_PATH with the path of the JSON file that contains your credentials.

    8. Install the Google Cloud CLI.

    9. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

    10. To initialize the gcloud CLI, run the following command:

      gcloud init

  2. Create a Cloud Storage bucket. You need a Cloud Storage bucket to hold tutorial data. If you do not have one ready to use, create a new bucket in your project by following the steps below. A gcloud command-line alternative is sketched at the end of this section.

    1. In the Google Cloud console, go to the Cloud Storage Buckets page.

      Go to Buckets

    2. Click Create.
    3. On the Create a bucket page, enter your bucket information. To go to the next step, click Continue.
      1. In the Get started section, do the following:
        • Enter a globally unique name that meets the bucket naming requirements.
        • To add a bucket label, expand the Labels section, click Add label, and specify a key and a value for your label.
      2. In the Choose where to store your data section, do the following:
        1. Select a Location type.
        2. Choose a location where your bucket's data is permanently stored from the Location type drop-down menu.
        3. To set up cross-bucket replication, select Add cross-bucket replication via Storage Transfer Service and follow these steps:

          Set up cross-bucket replication

          1. In the Bucket menu, select a bucket.
          2. In the Replication settings section, click Configure to configure settings for the replication job.

            The Configure cross-bucket replication pane appears.

            • To filter objects to replicate by object name prefix, enter a prefix that you want to include or exclude objects from, then click Add a prefix.
            • To set a storage class for the replicated objects, select a storage class from the Storage class menu. If you skip this step, the replicated objects will use the destination bucket's storage class by default.
            • Click Done.
      3. In the Choose how to store your data section, do the following:
        1. Select a default storage class for the bucket or Autoclass for automatic storage class management of your bucket's data.
        2. To enable hierarchical namespace, in the Optimize storage for data-intensive workloads section, select Enable hierarchical namespace on this bucket. Note: You cannot enable hierarchical namespace in existing buckets.
      4. In the Choose how to control access to objects section, select whether or not your bucket enforces public access prevention, and select an access control method for your bucket's objects. Note: You cannot change the Prevent public access setting if this setting is enforced by an organization policy.
      5. In the Choose how to protect object data section, do the following:
        • Select any of the options under Data protection that you want to set for your bucket.
          • To enable soft delete, click the Soft delete policy (For data recovery) checkbox, and specify the number of days you want to retain objects after deletion.
          • To set Object Versioning, click the Object versioning (For version control) checkbox, and specify the maximum number of versions per object and the number of days after which the noncurrent versions expire.
          • To enable the retention policy on objects and buckets, click the Retention (For compliance) checkbox, and then do the following:
            • To enable Object Retention Lock, click the Enable object retention checkbox.
            • To enable Bucket Lock, click the Set bucket retention policy checkbox, and choose a unit of time and a length of time for your retention period.
        • To choose how your object data will be encrypted, expand the Data encryption section, and select a Data encryption method.
    4. Click Create.

  3. Set local environment variables. Set environment variables on your local machine. Set your Google Cloud project ID and the name of the Cloud Storage bucket you will use for this tutorial. Also provide the name and region of an existing or new Dataproc cluster. You can create a cluster to use in this tutorial in the next step.

    PROJECT=project-id
    BUCKET_NAME=bucket-name
    CLUSTER=cluster-name
    REGION=cluster-region  # for example: "us-central1"

  4. Create a Dataproc cluster. Run the following command to create a single-node Dataproc cluster in the specified Compute Engine region.

    gcloud dataproc clusters create ${CLUSTER} \
        --project=${PROJECT} \
        --region=${REGION} \
        --single-node
    The above command installs the default cluster image version. You can use the --image-version flag to select an image version for your cluster. Each image version installs specific versions of Spark and Scala library components. If you prepare the Spark wordcount job in Java or Scala, you will reference the Spark and Scala versions installed on your cluster when you prepare the job package.

  5. Copy public data to your Cloud Storage bucket. Copy a public data Shakespeare text snippet into the input folder of your Cloud Storage bucket:

    gcloud storage cp gs://pub/shakespeare/rose.txt \
        gs://${BUCKET_NAME}/input/rose.txt

  6. Set up a Java (Apache Maven), Scala (SBT), or Python development environment. You can use Cloud Shell, which includes tools used in this tutorial, including Apache Maven, Python, and the Google Cloud CLI.
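
If you prefer the command line to the console steps in step 2, the following is a minimal sketch for creating the tutorial bucket with the gcloud CLI. It assumes the PROJECT and BUCKET_NAME values from step 3; the US multi-region location is only an example.

# Minimal sketch: create the tutorial bucket with the gcloud CLI.
# ${PROJECT} and ${BUCKET_NAME} are assumed to be set as in step 3;
# the location below is only an example.
gcloud storage buckets create gs://${BUCKET_NAME} \
    --project=${PROJECT} \
    --location=US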

Prepare the Spark wordcount job

Select a tab, below, to follow the steps to prepare a job package or file to submit to your cluster. You can prepare one of the following job types:

Java

  1. Copy the pom.xml file to your local machine. The following pom.xml file specifies Scala and Spark library dependencies, which are given a provided scope to indicate that the Dataproc cluster will provide these libraries at runtime. The pom.xml file does not specify a Cloud Storage dependency because the connector implements the standard HDFS interface. When a Spark job accesses Cloud Storage cluster files (files with URIs that start with gs://), the system automatically uses the Cloud Storage connector to access the files in Cloud Storage. Check your cluster image version (a version-check sketch follows these steps). Replace the version placeholders in the file with the Spark and Scala library versions used by your cluster's image version. Note that the spark-core_ artifact suffix is the Scala major.minor version number.
    <?xmlversion="1.0"encoding="UTF-8"?><projectxmlns="http://maven.apache.org/POM/4.0.0"xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"><modelVersion>4.0.0</modelVersion><groupId>dataproc.codelab</groupId><artifactId>word-count</artifactId><version>1.0</version><properties><maven.compiler.source>1.8</maven.compiler.source><maven.compiler.target>1.8</maven.compiler.target></properties><dependencies><dependency><groupId>org.scala-lang</groupId><artifactId>scala-library</artifactId><version>Scalaversion,forexample,2.11.8</version><scope>provided</scope></dependency><dependency><groupId>org.apache.spark</groupId><artifactId>spark-core_Scalamajor.minor.version,forexample,2.11</artifactId><version>Sparkversion,forexample,2.3.1</version><scope>provided</scope></dependency></dependencies></project>
  2. Copy the WordCount.java code listed below to your local machine.
    1. Create a set of directories with the path src/main/java/dataproc/codelab:
      mkdir -p src/main/java/dataproc/codelab
    2. Copy WordCount.java to your local machine into src/main/java/dataproc/codelab:
      cp WordCount.java src/main/java/dataproc/codelab

    WordCount.java is a Spark job in Java that reads text files from Cloud Storage, performs a word count, then writes the text file results to Cloud Storage.

    package dataproc.codelab;

    import java.util.Arrays;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class WordCount {
      public static void main(String[] args) {
        if (args.length != 2) {
          throw new IllegalArgumentException(
              "Exactly 2 arguments are required: <inputUri> <outputUri>");
        }

        String inputPath = args[0];
        String outputPath = args[1];

        JavaSparkContext sparkContext = new JavaSparkContext(new SparkConf().setAppName("Word Count"));
        JavaRDD<String> lines = sparkContext.textFile(inputPath);
        JavaRDD<String> words = lines.flatMap(
            (String line) -> Arrays.asList(line.split(" ")).iterator());
        JavaPairRDD<String, Integer> wordCounts = words.mapToPair(
            (String word) -> new Tuple2<>(word, 1))
            .reduceByKey((Integer count1, Integer count2) -> count1 + count2);
        wordCounts.saveAsTextFile(outputPath);
      }
    }
  3. Build the package.
    mvn clean package
    If the build is successful, a target/word-count-1.0.jar is created.
  4. Stage the package to Cloud Storage.
    gcloud storage cp target/word-count-1.0.jar \
        gs://${BUCKET_NAME}/java/word-count-1.0.jar
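
The pom.xml (and build.sbt) placeholders depend on the Spark and Scala versions that ship with your cluster's image version. One way to check, sketched below, is to print the cluster's image version with gcloud and then look up the matching Spark and Scala versions in the Dataproc image version documentation; the --format field path assumes the standard Dataproc cluster resource layout.

# Sketch: print the image version of the tutorial cluster, then look up
# its Spark and Scala versions in the Dataproc image version documentation.
gcloud dataproc clusters describe ${CLUSTER} \
    --region=${REGION} \
    --format="value(config.softwareConfig.imageVersion)"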

Scala

  1. Copy the build.sbt file to your local machine. The following build.sbt file specifies Scala and Spark library dependencies, which are given a provided scope to indicate that the Dataproc cluster will provide these libraries at runtime. The build.sbt file does not specify a Cloud Storage dependency because the connector implements the standard HDFS interface. When a Spark job accesses Cloud Storage cluster files (files with URIs that start with gs://), the system automatically uses the Cloud Storage connector to access the files in Cloud Storage. Check your cluster image version (see the version-check sketch at the end of the Java tab). Replace the version placeholders in the file with the Spark and Scala library versions used by your cluster's image version.
    scalaVersion := "Scala version, for example, 2.11.8"

    name := "word-count"
    organization := "dataproc.codelab"
    version := "1.0"

    libraryDependencies ++= Seq(
      "org.scala-lang" % "scala-library" % scalaVersion.value % "provided",
      "org.apache.spark" %% "spark-core" % "Spark version, for example, 2.3.1" % "provided"
    )
  2. Copy word-count.scala to your local machine. This is a Spark job in Scala that reads text files from Cloud Storage, performs a word count, then writes the text file results to Cloud Storage.
    package dataproc.codelab

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkConf

    object WordCount {
      def main(args: Array[String]) {
        if (args.length != 2) {
          throw new IllegalArgumentException(
              "Exactly 2 arguments are required: <inputPath> <outputPath>")
        }

        val inputPath = args(0)
        val outputPath = args(1)

        val sc = new SparkContext(new SparkConf().setAppName("Word Count"))
        val lines = sc.textFile(inputPath)
        val words = lines.flatMap(line => line.split(" "))
        val wordCounts = words.map(word => (word, 1)).reduceByKey(_ + _)
        wordCounts.saveAsTextFile(outputPath)
      }
    }
  3. Build the package.
    sbt clean package
    If the build is successful, a target/scala-2.11/word-count_2.11-1.0.jar is created.
  4. Stage the package to Cloud Storage.
    gcloud storage cp target/scala-2.11/word-count_2.11-1.0.jar \
        gs://${BUCKET_NAME}/scala/word-count_2.11-1.0.jar

Python

  1. Copy word-count.py to your local machine. This is a Spark job in Python using PySpark that reads text files from Cloud Storage, performs a word count, then writes the text file results to Cloud Storage.
    #!/usr/bin/env python

    import pyspark
    import sys

    if len(sys.argv) != 3:
      raise Exception("Exactly 2 arguments are required: <inputUri> <outputUri>")

    inputUri = sys.argv[1]
    outputUri = sys.argv[2]

    sc = pyspark.SparkContext()
    lines = sc.textFile(sys.argv[1])
    words = lines.flatMap(lambda line: line.split())
    wordCounts = words.map(lambda word: (word, 1)).reduceByKey(lambda count1, count2: count1 + count2)
    wordCounts.saveAsTextFile(sys.argv[2])
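
Unlike the Java and Scala jobs, word-count.py does not need to be built or staged; you submit the local file directly in the next section. If you prefer to keep the job file with the other tutorial artifacts in Cloud Storage, you can optionally copy it to your bucket and pass its gs:// URI to the submit command instead of the local path, for example:

# Optional: stage the PySpark job file to Cloud Storage. You can then submit
# gs://${BUCKET_NAME}/python/word-count.py instead of the local file path.
gcloud storage cp word-count.py gs://${BUCKET_NAME}/python/word-count.py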

Submit the job

Run the following gcloud command to submit the wordcount job to your Dataproc cluster.

Java

gcloud dataproc jobs submit spark \
    --cluster=${CLUSTER} \
    --class=dataproc.codelab.WordCount \
    --jars=gs://${BUCKET_NAME}/java/word-count-1.0.jar \
    --region=${REGION} \
    -- gs://${BUCKET_NAME}/input/ gs://${BUCKET_NAME}/output/

Scala

gcloud dataproc jobs submit spark \
    --cluster=${CLUSTER} \
    --class=dataproc.codelab.WordCount \
    --jars=gs://${BUCKET_NAME}/scala/word-count_2.11-1.0.jar \
    --region=${REGION} \
    -- gs://${BUCKET_NAME}/input/ gs://${BUCKET_NAME}/output/

Python

gcloud dataproc jobs submit pyspark word-count.py \
    --cluster=${CLUSTER} \
    --region=${REGION} \
    -- gs://${BUCKET_NAME}/input/ gs://${BUCKET_NAME}/output/
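
By default, gcloud dataproc jobs submit waits for the job to finish and streams the driver output to your terminal. If you want to review jobs on the cluster afterward, regardless of which job type you submitted, a sketch such as the following can help; JOB_ID is a placeholder for an ID shown in the list output.

# Sketch: list jobs submitted to the tutorial cluster, then show the
# details of one job by its ID.
gcloud dataproc jobs list --cluster=${CLUSTER} --region=${REGION}
gcloud dataproc jobs describe JOB_ID --region=${REGION}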

View the output

After the job finishes, run the following gcloud CLI command to view the wordcount output.

gcloud storage cat gs://${BUCKET_NAME}/output/*

The wordcount output should be similar to the following:

(a,2)
(call,1)
(What's,1)
(sweet.,1)
(we,1)
(as,1)
(name?,1)
(any,1)
(other,1)
(rose,1)
(smell,1)
(name,1)
(would,1)
(in,1)
(which,1)
(That,1)
(By,1)

Clean up

After you finish the tutorial, you can clean up the resources that you created so that they stop using quota and incurring charges. The following sections describe how to delete or turn off these resources.

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

    Caution: Deleting a project has the following effects:
    • Everything in the project is deleted. If you used an existing project for the tasks in this document, when you delete it, you also delete any other work you've done in the project.
    • Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.

    If you plan to explore multiple architectures, tutorials, or quickstarts, reusing projects can help you avoid exceeding project quota limits.

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete the Dataproc cluster

Instead of deleting your project, you might want to only delete your cluster within the project.
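
A minimal sketch for deleting the tutorial cluster with the gcloud CLI, assuming the CLUSTER and REGION values set earlier in the tutorial:

# Delete the single-node tutorial cluster; gcloud prompts you to confirm.
gcloud dataproc clusters delete ${CLUSTER} --region=${REGION}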

Delete the Cloud Storage bucket

  • In the Google Cloud console, go to the Cloud Storage Buckets page.

    Go to Buckets

  • Click the checkbox for the bucket that you want to delete.
  • To delete the bucket, click Delete, and then follow the instructions.
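
If you prefer the command line, the following sketch removes the bucket and everything in it with the gcloud CLI, assuming the BUCKET_NAME value set earlier in the tutorial:

# Recursively delete the tutorial bucket and all of its objects,
# including the input and output folders used in this tutorial.
gcloud storage rm --recursive gs://${BUCKET_NAME}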
