Export metadata from Dataproc Metastore
This page explains how to export metadata from Dataproc Metastore.

The export metadata feature lets you save your metadata in a portable storage format. After you export your data, you can then import the metadata into another Dataproc Metastore service or a self-managed Hive Metastore (HMS).
About exporting metadata
When you export metadata from Dataproc Metastore, the service stores the data in one of the following file formats:
- A set of Avro files stored in a folder.
- A single MySQL dump file stored in a Cloud Storage folder.
Avro
Avro-based exports are only supported for Hive versions 2.3.6 and 3.1.2. When you export Avro files, Dataproc Metastore creates a <table-name>.avro file for each table in your database.

To export Avro files, your Dataproc Metastore service can use the MySQL or Spanner database type.
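As an illustration only (the bucket, folder, and file names below are hypothetical, and the exact layout may vary), listing a destination folder after an Avro export could look something like this:

```
# List the export destination; all names here are illustrative, not prescribed.
gcloud storage ls --recursive gs://my-bucket/metastore-exports/
#
# Example output (one <table-name>.avro file per table):
#   gs://my-bucket/metastore-exports/hive-export-2024-01-01/orders.avro
#   gs://my-bucket/metastore-exports/hive-export-2024-01-01/customers.avro
```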
MySQL
MySQL-based exports are supported for all versions of Hive. When you export MySQL files, Dataproc Metastore creates a single SQL file that contains all your table information.

To export MySQL files, your Dataproc Metastore service must use the MySQL database type. The Spanner database type doesn't support MySQL exports.
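Similarly, as a hypothetical sketch (bucket, folder, and file names are illustrative), a MySQL export leaves a single SQL dump file in the destination subfolder:

```
# Illustrative listing; the actual dump file name is determined by the export job.
gcloud storage ls --recursive gs://my-bucket/metastore-exports/
#   gs://my-bucket/metastore-exports/hive-export-2024-01-01/hive.sql
```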
Before you begin
- Enable Dataproc Metastore in your project.
- Understand networking requirements specific to your project.
- Create a Dataproc Metastore service.
Required roles
To get the permissions that you need to export metadata from Dataproc Metastore, ask your administrator to grant you the following IAM roles:
- To export metadata, one of the following roles on the Dataproc Metastore service:
  - Dataproc Metastore Editor (roles/metastore.editor)
  - Dataproc Metastore Administrator (roles/metastore.admin)
  - Dataproc Metastore Metadata Operator (roles/metastore.metadataOperator)
- For MySQL and Avro, to use the Cloud Storage object for export: grant your user account and the Dataproc Metastore service agent the Storage Object Creator role (roles/storage.objectCreator) on the Cloud Storage bucket, as shown in the sketch after this list.
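For example, here's a minimal sketch of granting that bucket-level role with the gcloud CLI, assuming a hypothetical bucket my-bucket; replace the member emails with your own user account and your service agent's address:

```
# Grant your user account permission to create objects in the bucket.
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member=user:USER_EMAIL \
  --role=roles/storage.objectCreator

# Grant the same role to the Dataproc Metastore service agent.
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
  --member=serviceAccount:SERVICE_AGENT_EMAIL \
  --role=roles/storage.objectCreator
```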
For more information about granting roles, see Manage access to projects, folders, and organizations.
These predefined roles contain the permissions required to export metadata from Dataproc Metastore. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to export metadata from Dataproc Metastore:

- To export metadata: metastore.services.export on the metastore service.
- For MySQL and Avro, to use the Cloud Storage object for export, grant your user account and the Dataproc Metastore service agent: storage.objects.create on the Cloud Storage bucket.
You might also be able to get these permissions with custom roles or other predefined roles.
For more information about specific Dataproc Metastore roles and permissions, see Dataproc Metastore IAM overview.

Export metadata
Before exporting your metadata, note the following considerations:
- While an export is running, you can't update a Dataproc Metastore service, for example by changing configuration settings. However, you can still use it for normal operations, such as accessing its metadata from attached Dataproc or self-managed clusters.
- The metadata export feature only exports metadata. Data that's created by Apache Hive in internal tables isn't replicated in the export.
To export metadata from a Dataproc Metastore service, perform the following steps.
Console
1. In the Google Cloud console, open the Dataproc Metastore page.

2. On the Dataproc Metastore page, click the name of the service you want to export metadata from.

   The Service detail page opens.

3. In the navigation bar, click Export.

   The Export metadata page opens.

4. In the Destination section, choose either MySQL or Avro.

5. In the Destination URI field, click Browse and select the Cloud Storage URI to which you want to export your files.

   You can also enter your bucket location in the provided text field, using the format bucket/object or bucket/folder/object.

   Note: To store your exported metadata, the export job creates a new subfolder in your Cloud Storage URI destination.

6. To start the export, click Submit.
When finished, your export appears in a table on the Service detail page, on the Import/Export tab.

When the export completes, Dataproc Metastore automatically returns to the active state, regardless of whether the export succeeded.
gcloud CLI
To export metadata from a service, run the following gcloud metastore services export gcs command:

```
gcloud metastore services export gcs SERVICE \
  --location=LOCATION \
  --destination-folder=gs://bucket-name/path/to/folder \
  --dump-type=DUMP_TYPE
```
Replace the following:
- SERVICE: the name of your Dataproc Metastore service.
- LOCATION: the Google Cloud region in which your Dataproc Metastore service resides.
- bucket-name/path/to/folder: the Cloud Storage destination folder where you want to store your export.
- DUMP_TYPE: the type of database dump generated by the export. Accepted values include mysql and avro. The default value is mysql.
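For example, a hypothetical invocation that exports an Avro dump from a service named my-service in us-central1 (all values illustrative) might look like this:

```
# Export Avro files to a Cloud Storage folder; adjust every value to your setup.
gcloud metastore services export gcs my-service \
  --location=us-central1 \
  --destination-folder=gs://my-bucket/metastore-exports \
  --dump-type=avro
```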
Verify that the export was successful.
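One way to verify, assuming the hypothetical destination folder above, is to list it and confirm that the export job created a subfolder containing the dump files:

```
# Recursively list the destination; you should see the export's new subfolder.
gcloud storage ls --recursive gs://my-bucket/metastore-exports/
```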
When the export completes, Dataproc Metastore automatically returns to the active state, regardless of whether the export succeeded.
REST
Follow the API instructions to export metadata from a service by using the APIs Explorer.

When the export completes, the service automatically returns to the active state, regardless of whether it succeeded.
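As a sketch, you can also call the exportMetadata method directly with curl. The request fields shown here (destinationGcsFolder, databaseDumpType) follow the v1 API, and the project, region, service, and bucket values are hypothetical:

```
# Call the v1 exportMetadata method; replace PROJECT_ID and the other values.
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d '{
        "destinationGcsFolder": "gs://my-bucket/metastore-exports",
        "databaseDumpType": "MYSQL"
      }' \
  "https://metastore.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/services/my-service:exportMetadata"
```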
View export history
To view the export history of a Dataproc Metastore service in the Google Cloud console, complete the following steps:

1. In the Google Cloud console, open the Dataproc Metastore page.
2. In the navigation bar, click Import/Export.

Your export history appears in the Export history table.
The history displays up to the last 25 exports.
Deleting a Dataproc Metastore service also deletes all associated export history.
Troubleshoot common issues
Some common issues include the following:
- The service agent or user account doesn't have necessary permissions.
- Job fails because the database file is too large.
For more help resolving these issues, see Import and export error scenarios.
What's next