Serverless for Apache Spark staging buckets

This document provides information about Serverless for Apache Spark staging buckets. Serverless for Apache Spark creates a Cloud Storage staging bucket in your project or reuses an existing staging bucket from previous batch creation requests. This is the default bucket created by Dataproc on Compute Engine clusters. For more information, see Dataproc staging and temp buckets.

Serverless for Apache Spark stores workload dependencies, config files, and job driver console output in the staging bucket.
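If you prefer to supply your own staging bucket rather than rely on the auto-created one, the `gcloud dataproc batches submit` command accepts a `--deps-bucket` flag for this purpose. A minimal sketch, assuming a hypothetical script, region, and bucket name (a real run requires an authenticated gcloud environment):

```shell
# Submit a PySpark batch, staging dependencies and config files in a
# user-provided Cloud Storage bucket instead of the default
# dataproc-staging- bucket. All names below are placeholders.
gcloud dataproc batches submit pyspark my_job.py \
    --region=us-central1 \
    --deps-bucket=gs://my-staging-bucket
```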

Serverless for Apache Spark sets regional staging buckets in Cloud Storage locations according to the Compute Engine zone where your workload is deployed, and then creates and manages these project-level, per-location buckets. Serverless for Apache Spark-created staging buckets are shared among workloads in the same region, and are created with a Cloud Storage soft delete retention duration set to 0 seconds.

To locate the Dataproc default staging bucket, in the Google Cloud console, go to Cloud Storage and filter the results using the dataproc-staging- prefix.
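The same prefix filter can be applied programmatically. A minimal local sketch of that filtering logic, using hypothetical bucket names (with the google-cloud-storage Python client, you would pass the same prefix to `Client.list_buckets(prefix=...)` against a real project):

```python
DATAPROC_STAGING_PREFIX = "dataproc-staging-"

def find_staging_buckets(bucket_names):
    """Return the bucket names that carry the Dataproc staging prefix."""
    return [name for name in bucket_names if name.startswith(DATAPROC_STAGING_PREFIX)]

# Hypothetical bucket names for illustration only.
buckets = [
    "dataproc-staging-us-central1-123456-abcd",
    "my-app-assets",
    "dataproc-temp-us-central1-123456-abcd",
]
print(find_staging_buckets(buckets))
# → ['dataproc-staging-us-central1-123456-abcd']
```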


Last updated 2026-02-19 UTC.