Create a healthcare search data store

To search clinical data in Vertex AI Search, you can follow one of these workflows:

  • Create a healthcare data store, import FHIR R4 data into the data store, connect it to a healthcare search app, and query the clinical data.
  • Create a healthcare search app, create a healthcare data store and import FHIR R4 data into the data store during the app creation process, and query the clinical data. For more information, see Create a healthcare search app.

This page describes the first method.

About data import frequency

You can import FHIR R4 data into a data store in the following ways:

  • Batch import: a one-time import. Data is imported into a data store in batches. For further incremental imports, see Refresh healthcare data.

  • Streaming import: a near real-time streaming data import. Any incremental changes in the source FHIR store are synchronized in the Vertex AI Search data store. Streaming requires a data connector, which is a type of data store. To create a data connector, you must set up a collection. A data connector contains an entity, which is also a data store instance.

    You can also pause and resume streaming and perform manual synchronization whenever necessary. For more information, see Manage a healthcare search data store.

    The data streaming rate for a given Google Cloud project depends on the following quotas. If you exceed the quota, you might experience streaming delays.

You can select the data import frequency at the time of data store creation and you can't change this configuration later.

Before you begin

Before you create the healthcare data store and import data into it, understand the following:

  • The relationship between apps and data stores for healthcare search. For more information, see About apps and data stores.

  • The preparation of your FHIR data for ingestion.

  • Vertex AI Search for healthcare provides search services only in the US multi-region (us). Therefore, your healthcare search app and data stores must reside in the us multi-region.

  • If you're importing healthcare data from a Cloud Healthcare API FHIR store in one Google Cloud project to a Vertex AI Search data store in a different Google Cloud project and you're using VPC Service Controls, the two projects must be in the same perimeter.
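The location and resource-name requirements above lend themselves to an early sanity check in scripts. The following is an illustrative sketch only (the helper names and checks are not part of any Google SDK); it assumes the standard Cloud Healthcare FHIR store resource-name format:

```python
import re

# Full resource name of a Cloud Healthcare API FHIR store:
# projects/{project}/locations/{location}/datasets/{dataset}/fhirStores/{fhir_store}
_FHIR_STORE_RE = re.compile(
    r"^projects/(?P<project>[^/]+)"
    r"/locations/(?P<location>[^/]+)"
    r"/datasets/(?P<dataset>[^/]+)"
    r"/fhirStores/(?P<store>[^/]+)$"
)


def check_fhir_store_name(name: str) -> dict:
    """Parse a FHIR store resource name and return its components.

    Raises ValueError if the name doesn't match the expected format.
    """
    match = _FHIR_STORE_RE.match(name)
    if match is None:
        raise ValueError(f"Not a FHIR store resource name: {name!r}")
    return match.groupdict()


def check_search_location(location: str) -> None:
    """Vertex AI Search for healthcare serves only the us multi-region."""
    if location != "us":
        raise ValueError(
            f"Healthcare search data stores must be in 'us', got {location!r}"
        )
```

Failing fast on a malformed resource name is cheaper than debugging a rejected import call later.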

Create a data store and import your data

You can create a data store and import your FHIR R4 data either in the Google Cloud console or using the API with the following approaches:

Permissions required for this task

Grant the following Identity and Access Management (IAM) roles to the service-PROJECT_NUMBER@gcp-sa-discoveryengine.iam.gserviceaccount.com service account in the project that contains the Vertex AI Search data store:

Purpose | Roles
To perform a one-time batch import of FHIR data from FHIR stores in Cloud Healthcare API. |
To perform a streaming import of FHIR data from FHIR stores in Cloud Healthcare API in the same Google Cloud project. |
To perform a streaming import of FHIR data from FHIR stores in Cloud Healthcare API in a different Google Cloud project. |
To import FHIR data that references files in Cloud Storage. These are granted by default if the referenced files are in the same Google Cloud project as the Vertex AI Search app. | Storage Object Admin (roles/storage.objectAdmin)
To customize the schema when creating a data store to configure the indexability, searchability, and retrievability of FHIR resources and elements. | Storage Object Admin (roles/storage.objectAdmin)

Grant the following Identity and Access Management roles to the service-PROJECT_NUMBER@gcp-sa-discoveryengine.iam.gserviceaccount.com service account in the project that contains the Cloud Healthcare API FHIR R4 data store:

Purpose | Roles
To perform a streaming import of FHIR data from FHIR stores in Cloud Healthcare API in a different Google Cloud project. |

Grant the following Identity and Access Management roles to the service-SOURCE_PROJECT_NUMBER@gcp-sa-healthcare.iam.gserviceaccount.com service account in the project that contains the Cloud Healthcare API FHIR R4 data store:

Purpose | Roles
To perform a streaming import of FHIR data from FHIR stores in Cloud Healthcare API in the same Google Cloud project. |
To customize the schema when creating a data store to configure the indexability, searchability, and retrievability of FHIR resources and elements. | Storage Object Admin (roles/storage.objectAdmin)

Create a static data store and perform a one-time batch import

This section describes how to create a Vertex AI Search data store in which you can only perform batch imports. You can import batch data when you first create the data store and perform incremental batch imports whenever necessary.

Console

  1. In the Google Cloud console, go to the AI Applications page.

    AI Applications

  2. In the navigation menu, click Data Stores.

  3. Click Create data store.

  4. In the Select a data source pane, select Healthcare API (FHIR) as your data source.
  5. To import data from your FHIR store, do one of the following:
    • Select the FHIR store from the list of available FHIR stores:
      1. Expand the FHIR store field.
      2. In this list, select a dataset that resides in a permitted location and then select a FHIR store that uses FHIR version R4.
    • Enter the FHIR store manually:
      1. Expand the FHIR store field.
      2. Click Enter FHIR store manually.
      3. In the FHIR store name dialog, enter the full name of the FHIR store in the following format:

        projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/fhirStores/FHIR_STORE_ID

      4. Click Save.
  6. In the Synchronization section, select one of the following options. This selection cannot be changed after the data store is created.
    • One time: to perform a one-time batch data import. For further incremental imports, see Refresh healthcare data.
    • Streaming: to perform a near real-time streaming data import. To stream data, you must create a data connector, which is a type of data store. To set up a streaming data store using the REST API, contact your customer engineer.
  7. In the What is the schema for this data? section, select one of these options:
    • Google predefined schema: to retain the Google-defined schema configurations, such as indexability, searchability, and retrievability, for the supported FHIR resources and elements. After you select this option, you cannot update the schema after you create the data store. If you want to be able to change the schema after the data store creation, select the Custom schema option.
      1. Click Continue.
      2. In the Your data store name field, enter a name for your data store.
      3. Click Create.
      4. The data store you created is listed on the Data Stores page.

    • Custom schema: to define your own schema configurations, such as indexability, searchability, and retrievability, for the supported FHIR resources and elements. To set up a configurable schema, contact your customer engineer.
      1. Click Continue.
      2. Review the schema, expand each field, and edit the field settings.
      3. Click Add new fields to add new fields on the supported FHIR resources. You cannot remove the fields provided in the Google-defined schema.
      4. Click Continue.
      5. In the Your data connector name field, enter a name for your data connector.
      6. Click Create.
      7. The data connector you created is listed on the Data Stores page. The source FHIR store is added as an entity within the data connector.

  8. Click Continue.

REST

  1. Create a data store.

    curl -X POST \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json" \
     -H "X-Goog-User-Project: PROJECT_ID" \
     "https://us-discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/us/collections/default_collection/dataStores?dataStoreId=DATA_STORE_ID" \
     -d '{
       "displayName": "DATA_STORE_DISPLAY_NAME",
       "industryVertical": "HEALTHCARE_FHIR",
       "solutionTypes": ["SOLUTION_TYPE_SEARCH"],
       "searchTier": "STANDARD",
       "searchAddOns": ["LLM"],
       "healthcareFhirConfig": {
         "enableConfigurableSchema": CONFIGURABLE_SCHEMA_TRUE|FALSE
       }
     }'

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • DATA_STORE_ID: the ID of the Vertex AI Search data store that you want to create. This ID can contain only lowercase letters, digits, underscores, and hyphens.
    • DATA_STORE_DISPLAY_NAME: the display name of the Vertex AI Search data store that you want to create.
    • CONFIGURABLE_SCHEMA_TRUE|FALSE: a boolean that, when set to true, lets you configure the data store schema using the schema.update method.

    Response

    You should receive a JSON response similar to the following. If the value for the done key is true, the operation to create the data store completed. Otherwise, the data store creation operation was unsuccessful.

    {
      "name": "OPERATION_ID",
      "done": true
    }
  2. If the source FHIR store and the target Vertex AI Search data store are in the same Google Cloud project, call the following method to perform a one-time batch import. If they're not in the same project, go to the next step.

    curl -X POST \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     -H "X-Goog-User-Project: PROJECT_ID" \
     "https://us-discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/us/dataStores/DATA_STORE_ID/branches/0/documents:import" \
     -d '{
       "reconciliation_mode": "FULL",
       "fhir_store_source": {
         "fhir_store": "projects/PROJECT_ID/locations/CLOUD_HEALTHCARE_DATASET_LOCATION/datasets/CLOUD_HEALTHCARE_DATASET_ID/fhirStores/FHIR_STORE_ID"
       }
     }'

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • DATA_STORE_ID: the ID of the Vertex AI Search data store.
    • CLOUD_HEALTHCARE_DATASET_ID: the ID of the Cloud Healthcare API dataset that contains the source FHIR store.
    • CLOUD_HEALTHCARE_DATASET_LOCATION: the location of the Cloud Healthcare API dataset that contains the source FHIR store.
    • FHIR_STORE_ID: the ID of the Cloud Healthcare API FHIR R4 store.

    Response

    You should receive a JSON response similar to the following. The response contains an identifier for a long-running operation. Long-running operations are returned when method calls might take a substantial amount of time to complete. Note the value of IMPORT_OPERATION_ID. You need this value to verify the status of the import or cancel an ongoing batch import.

    {
      "name": "projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/DATA_STORE_ID/branches/0/operations/IMPORT_OPERATION_ID",
      "metadata": {
        "@type": "type.googleapis.com/google.cloud.discoveryengine.v1.ImportDocumentsMetadata"
      }
    }
  3. If the source FHIR store and the target Vertex AI Search data store are in different Google Cloud projects, call the following method to perform a one-time batch import. If they're in the same project, go back to the previous step.

    curl -X POST \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json; charset=utf-8" \
     -H "X-Goog-User-Project: PROJECT_ID" \
     "https://us-discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/us/dataStores/DATA_STORE_ID/branches/0/documents:import" \
     -d '{
       "reconciliation_mode": "FULL",
       "fhir_store_source": {
         "fhir_store": "projects/SOURCE_PROJECT_ID/locations/CLOUD_HEALTHCARE_DATASET_LOCATION/datasets/CLOUD_HEALTHCARE_DATASET_ID/fhirStores/FHIR_STORE_ID"
       }
     }'

    Replace the following:

    • PROJECT_ID: the ID of the Google Cloud project that contains the Vertex AI Search data store.
    • DATA_STORE_ID: the ID of the Vertex AI Search data store.
    • SOURCE_PROJECT_ID: the ID of the Google Cloud project that contains the Cloud Healthcare API dataset and FHIR store.
    • CLOUD_HEALTHCARE_DATASET_ID: the ID of the Cloud Healthcare API dataset that contains the source FHIR store.
    • CLOUD_HEALTHCARE_DATASET_LOCATION: the location of the Cloud Healthcare API dataset that contains the source FHIR store.
    • FHIR_STORE_ID: the ID of the Cloud Healthcare API FHIR R4 store.

    Response

    You should receive a JSON response similar to the following. The response contains an identifier for a long-running operation. Long-running operations are returned when method calls might take a substantial amount of time to complete. Note the value of IMPORT_OPERATION_ID. You need this value to verify the status of the import.

    {
      "name": "projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/DATA_STORE_ID/branches/0/operations/IMPORT_OPERATION_ID",
      "metadata": {
        "@type": "type.googleapis.com/google.cloud.discoveryengine.v1.ImportDocumentsMetadata"
      }
    }
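When scripting the REST calls above, the done flag in the create-data-store response is what tells you whether to proceed to the import step. A minimal, hypothetical helper (the response shape is as shown in step 1; this function is not part of any Google SDK):

```python
def data_store_creation_succeeded(operation_response: dict) -> bool:
    """Return True if the create-data-store operation response reports completion.

    Expects the JSON shape shown above, e.g. {"name": "...", "done": true}.
    A missing or false `done` key means the creation did not complete.
    """
    return operation_response.get("done") is True
```

A script can parse the curl output with json.loads and abort early when this returns False.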

Python

For more information, see the Vertex AI Search Python API reference documentation.

To authenticate to Vertex AI Search, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

Create a data store

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"  # Values: "global"
# data_store_id = "YOUR_DATA_STORE_ID"


def create_data_store_sample(
    project_id: str,
    location: str,
    data_store_id: str,
) -> str:
    #  For more information, refer to:
    # https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
    client_options = (
        ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
        if location != "global"
        else None
    )

    # Create a client
    client = discoveryengine.DataStoreServiceClient(client_options=client_options)

    # The full resource name of the collection
    # e.g. projects/{project}/locations/{location}/collections/default_collection
    parent = client.collection_path(
        project=project_id,
        location=location,
        collection="default_collection",
    )

    data_store = discoveryengine.DataStore(
        display_name="My Data Store",
        # Options: GENERIC, MEDIA, HEALTHCARE_FHIR
        industry_vertical=discoveryengine.IndustryVertical.GENERIC,
        # Options: SOLUTION_TYPE_RECOMMENDATION, SOLUTION_TYPE_SEARCH, SOLUTION_TYPE_CHAT, SOLUTION_TYPE_GENERATIVE_CHAT
        solution_types=[discoveryengine.SolutionType.SOLUTION_TYPE_SEARCH],
        # TODO(developer): Update content_config based on data store type.
        # Options: NO_CONTENT, CONTENT_REQUIRED, PUBLIC_WEBSITE
        content_config=discoveryengine.DataStore.ContentConfig.CONTENT_REQUIRED,
    )

    request = discoveryengine.CreateDataStoreRequest(
        parent=parent,
        data_store_id=data_store_id,
        data_store=data_store,
        # Optional: For Advanced Site Search Only
        # create_advanced_site_search=True,
    )

    # Make the request
    operation = client.create_data_store(request=request)

    print(f"Waiting for operation to complete: {operation.operation.name}")
    response = operation.result()

    # After the operation is complete,
    # get information from operation metadata
    metadata = discoveryengine.CreateDataStoreMetadata(operation.metadata)

    # Handle the response
    print(response)
    print(metadata)

    return operation.operation.name

Import documents

from google.api_core.client_options import ClientOptions
from google.cloud import discoveryengine

# TODO(developer): Uncomment these variables before running the sample.
# project_id = "YOUR_PROJECT_ID"
# location = "YOUR_LOCATION"  # Values: "us"
# data_store_id = "YOUR_DATA_STORE_ID"
# healthcare_project_id = "YOUR_HEALTHCARE_PROJECT_ID"
# healthcare_location = "YOUR_HEALTHCARE_LOCATION"
# healthcare_dataset_id = "YOUR_HEALTHCARE_DATASET_ID"
# healthcare_fhir_store_id = "YOUR_HEALTHCARE_FHIR_STORE_ID"

#  For more information, refer to:
# https://cloud.google.com/generative-ai-app-builder/docs/locations#specify_a_multi-region_for_your_data_store
client_options = (
    ClientOptions(api_endpoint=f"{location}-discoveryengine.googleapis.com")
    if location != "global"
    else None
)

# Create a client
client = discoveryengine.DocumentServiceClient(client_options=client_options)

# The full resource name of the search engine branch.
# e.g. projects/{project}/locations/{location}/dataStores/{data_store_id}/branches/{branch}
parent = client.branch_path(
    project=project_id,
    location=location,
    data_store=data_store_id,
    branch="default_branch",
)

request = discoveryengine.ImportDocumentsRequest(
    parent=parent,
    fhir_store_source=discoveryengine.FhirStoreSource(
        fhir_store=client.fhir_store_path(
            healthcare_project_id,
            healthcare_location,
            healthcare_dataset_id,
            healthcare_fhir_store_id,
        ),
    ),
    # Options: `FULL`, `INCREMENTAL`
    reconciliation_mode=discoveryengine.ImportDocumentsRequest.ReconciliationMode.INCREMENTAL,
)

# Make the request
operation = client.import_documents(request=request)

print(f"Waiting for operation to complete: {operation.operation.name}")
response = operation.result()

# After the operation is complete,
# get information from operation metadata
metadata = discoveryengine.ImportDocumentsMetadata(operation.metadata)

# Handle the response
print(response)
print(metadata)

Create a streaming data store and set up a streaming import

This section describes how to create a streaming Vertex AI Search data store that continuously streams changes from your Cloud Healthcare API FHIR store.

Note: If you have set up Pub/Sub notifications for your FHIR resources in Cloud Healthcare API, the Cloud Healthcare API doesn't send notifications when a FHIR resource is imported from Cloud Storage. For more information, see FHIR Pub/Sub notifications.

Console

  1. In the Google Cloud console, go to the AI Applications page.

    AI Applications

  2. In the navigation menu, click Data Stores.

  3. Click Create data store.

  4. In the Select a data source pane, select Healthcare API (FHIR) as your data source.
  5. To import data from your FHIR store, do one of the following:
    • Select the FHIR store from the list of available FHIR stores:
      1. Expand the FHIR store field.
      2. In this list, select a dataset that resides in a permitted location and then select a FHIR store that uses FHIR version R4.
    • Enter the FHIR store manually:
      1. Expand the FHIR store field.
      2. Click Enter FHIR store manually.
      3. In the FHIR store name dialog, enter the full name of the FHIR store in the following format:

        projects/PROJECT_ID/locations/LOCATION/datasets/DATASET_ID/fhirStores/FHIR_STORE_ID

      4. Click Save.
  6. In the Synchronization section, select one of the following options. This selection cannot be changed after the data store is created.
    • One time: to perform a one-time batch data import. For further incremental imports, see Refresh healthcare data.
    • Streaming: to perform a near real-time streaming data import. To stream data, you must create a data connector, which is a type of data store. To set up a streaming data store using the REST API, contact your customer engineer.
  7. In the What is the schema for this data? section, select one of these options:
    • Google predefined schema: to retain the Google-defined schema configurations, such as indexability, searchability, and retrievability, for the supported FHIR resources and elements. After you select this option, you cannot update the schema after you create the data store. If you want to be able to change the schema after the data store creation, select the Custom schema option.
      1. Click Continue.
      2. In the Your data store name field, enter a name for your data store.
      3. Click Create.
      4. The data store you created is listed on the Data Stores page.

    • Custom schema: to define your own schema configurations, such as indexability, searchability, and retrievability, for the supported FHIR resources and elements. To set up a configurable schema, contact your customer engineer.
      1. Click Continue.
      2. Review the schema, expand each field, and edit the field settings.
      3. Click Add new fields to add new fields on the supported FHIR resources. You cannot remove the fields provided in the Google-defined schema.
      4. Click Continue.
      5. In the Your data connector name field, enter a name for your data connector.
      6. Click Create.
      7. The data connector you created is listed on the Data Stores page. The source FHIR store is added as an entity within the data connector.

  8. Click Continue.

REST

  1. Create a data connector to set up streaming.

    curl -X POST \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json" \
     -H "X-Goog-User-Project: PROJECT_ID" \
     "https://us-discoveryengine.googleapis.com/v1alpha/projects/PROJECT_ID/locations/us:setUpDataConnector" \
     -d '{
       "collectionId": "COLLECTION_ID",
       "collectionDisplayName": "COLLECTION_NAME",
       "dataConnector": {
         "dataSource": "gcp_fhir",
         "params": {
           "instance_uri": "projects/SOURCE_PROJECT_ID/locations/CLOUD_HEALTHCARE_DATASET_LOCATION/datasets/CLOUD_HEALTHCARE_DATASET_ID"
         },
         "entities": [
           {
             "entityName": "FHIR_STORE_ID",
             "healthcareFhirConfig": {
               "enableConfigurableSchema": CONFIGURABLE_SCHEMA_TRUE|FALSE,
               "enableStaticIndexingForBatchIngestion": STATIC_INDEXING_TRUE|FALSE
             }
           }
         ],
         "syncMode": "STREAMING"
       }
     }'

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • COLLECTION_ID: the ID of the collection to which you want to stream the FHIR R4 data.
    • COLLECTION_NAME: the name of the collection to which you want to stream the FHIR R4 data.
    • SOURCE_PROJECT_ID: the ID of the Google Cloud project that contains the Cloud Healthcare API dataset and FHIR store.
    • CLOUD_HEALTHCARE_DATASET_ID: the ID of the Cloud Healthcare API dataset that contains the source FHIR store.
    • CLOUD_HEALTHCARE_DATASET_LOCATION: the location of the Cloud Healthcare API dataset that contains the source FHIR store.
    • FHIR_STORE_ID: the ID of the Cloud Healthcare API FHIR R4 store.
    • CONFIGURABLE_SCHEMA_TRUE|FALSE: a boolean that, when set to true, lets you configure the data store schema using the schema.update method.
    • STATIC_INDEXING_TRUE|FALSE: a boolean that, when set to true, lets you import historical data with a higher indexing quota. This is useful when you expect your search app to encounter a higher data volume. However, individual records take longer to be indexed. Google strongly recommends that you set this field to true.

    Response

    You should receive a JSON response similar to the following. If the value for the done key is true, the operation to create the data store completed. Otherwise, the data store creation operation was unsuccessful.

    {
      "name": "OPERATION_ID",
      "done": true,
      "response": {
        "@type": "type.googleapis.com/google.cloud.discoveryengine.v1main.DataConnector"
      }
    }
    • If the collection is successfully created, a data connector is added to the list of data stores on the Data Stores page in the Google Cloud console.
    • The created data connector contains an entity, which has the same name as the FHIR R4 store from which you're streaming the data.
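If you assemble the setUpDataConnector payload in code rather than inline in curl, a sketch like the following can help avoid JSON quoting mistakes. The field names mirror the request body shown above; the helper itself and its parameter names are hypothetical:

```python
import json


def build_streaming_connector_body(
    collection_id: str,
    collection_display_name: str,
    source_project_id: str,
    dataset_location: str,
    dataset_id: str,
    fhir_store_id: str,
    enable_configurable_schema: bool = False,
    enable_static_indexing: bool = True,
) -> str:
    """Build the JSON body for the setUpDataConnector call shown above."""
    body = {
        "collectionId": collection_id,
        "collectionDisplayName": collection_display_name,
        "dataConnector": {
            "dataSource": "gcp_fhir",
            "params": {
                # Resource name of the Cloud Healthcare dataset (not the store).
                "instance_uri": (
                    f"projects/{source_project_id}"
                    f"/locations/{dataset_location}"
                    f"/datasets/{dataset_id}"
                ),
            },
            "entities": [
                {
                    "entityName": fhir_store_id,
                    "healthcareFhirConfig": {
                        "enableConfigurableSchema": enable_configurable_schema,
                        "enableStaticIndexingForBatchIngestion": enable_static_indexing,
                    },
                }
            ],
            "syncMode": "STREAMING",
        },
    }
    return json.dumps(body, indent=2)
```

Serializing with json.dumps guarantees the booleans and commas are well-formed before the request is sent.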

Verify data store creation and FHIR data import

This task shows you how to verify whether a data store was created successfully and whether FHIR data was imported into the data store successfully.

  • In the Google Cloud console: Select the data store and verify its details.
  • Through the REST API:
    1. Use the dataStores.get method to get the healthcare data store details.
    2. Use the operations.get method to get the details of the import operation.

To verify data store creation and data import, complete the following steps.

Console

  1. In the Google Cloud console, go to the AI Applications page.

    AI Applications

  2. In the navigation menu, click Data Stores.

    The Data Stores page displays a list of data stores in your Google Cloud project with their details.

  3. Verify whether the data store or the data connector that you created is in the data stores list. In the data stores list, a data connector that streams data contains a data store that has the same name as the Cloud Healthcare API FHIR store.

  4. Select the data store or the data connector and verify its details.

    • For a data store:
      • The summary table lists the following details:
        • The data store ID, type, and region.
        • The number of documents indicating the number of FHIR resources imported.
        • The timestamp when the last document was imported.
        • Optionally, click View details to see the document import details, such as the details about a successful, partial, or failed import.
      • The Documents tab lists the resource IDs of the imported FHIR resources and their resource types in a paginated table. You can filter this table to verify whether a particular resource was imported.
      • The Activity tab lists the document import details, such as the details about a successful, partial, or failed import.
    • For a data connector:
      • The summary table lists the following details:
        • The collection ID, type, and region.
        • The name of the connected app.
        • The state of the connector, which is either active or paused.
      • The Entities table shows the entity within the data connector. The entity's name is the source FHIR store name. The entity's ID is the data connector's ID appended with the source FHIR store name.
        • Click the entity name to see its details. Because an entity is a data store instance within a data connector, the entity details are the same as the data store details.
  5. In the Schema tab, view the properties for the supported FHIR resources and elements. Click Edit to configure the schema.

REST

  1. Verify the data store creation.

    curl -X GET \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     -H "Content-Type: application/json" \
     -H "X-Goog-User-Project: PROJECT_ID" \
     "https://us-discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/DATA_STORE_ID"

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • DATA_STORE_ID: the ID of the Vertex AI Search data store.

    Response

    You should receive a JSON response similar to the following. The response contains details of the created data store.

    {
      "name": "projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/DATA_STORE_ID",
      "displayName": "DATA_STORE_DISPLAY_NAME",
      "industryVertical": "HEALTHCARE_FHIR",
      "createTime": "DATA_STORE_CREATION_TIMESTAMP",
      "solutionTypes": [
        "SOLUTION_TYPE_SEARCH"
      ],
      "defaultSchemaId": "default_schema",
      "documentProcessingConfig": {
        "defaultParsingConfig": {
          "ocrParsingConfig": {}
        }
      }
    }
  2. Verify whether the FHIR data import operation is complete.

    curl -X GET \
     -H "Authorization: Bearer $(gcloud auth print-access-token)" \
     "https://us-discoveryengine.googleapis.com/v1/projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/DATA_STORE_ID/branches/0/operations/IMPORT_OPERATION_ID"

    Replace the following:

    • PROJECT_ID: the ID of your Google Cloud project.
    • DATA_STORE_ID: the ID of the Vertex AI Search data store.
    • IMPORT_OPERATION_ID: the operation ID of the long-running operation that's returned when you call the import method.

    Response

    You should receive a JSON response similar to the following. The import operation is a long-running operation. While the operation is running, the response contains the following fields:

    • successCount: indicates the number of FHIR resources that were imported successfully so far.
    • failureCount: indicates the number of FHIR resources that failed to be imported so far. This field is displayed only if there are FHIR resources that failed to be imported.

    When the operation is complete, the response contains the following fields:

    • successCount: indicates the number of FHIR resources that were imported successfully.
    • failureCount: indicates the number of FHIR resources that failed to be imported. This field is displayed only if there are any FHIR resources that failed to be imported.
    • totalCount: indicates the number of FHIR resources that are present in the source FHIR store. This field is displayed only if there are any FHIR resources that failed to be imported.
    • done: has the valuetrue to indicate that the import operation is complete.
    • errorSamples: provides information about the resources that failed to be imported. This field is displayed only if there are any FHIR resources that failed to be imported.
    • errorConfig: provides a path to a Cloud Storage location that contains the error summary log file.
    {
      "name": "projects/PROJECT_ID/locations/us/collections/default_collection/dataStores/DATA_STORE_ID/branches/0/operations/IMPORT_OPERATION_ID",
      "metadata": {
        "@type": "type.googleapis.com/google.cloud.discoveryengine.v1.ImportDocumentsMetadata",
        "createTime": "START_TIMESTAMP",
        "updateTime": "END_TIMESTAMP",
        "successCount": "SUCCESS_COUNT",
        "failureCount": "FAILURE_COUNT",
        "totalCount": "TOTAL_COUNT"
      },
      "done": true,
      "response": {
        "@type": "type.googleapis.com/google.cloud.discoveryengine.v1.ImportDocumentsResponse",
        "errorSamples": [ERROR_SAMPLE],
        "errorConfig": {
          "gcsPrefix": "LOG_FILE_LOCATION"
        }
      }
    }
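When polling operations.get from a script, the fields described above can be reduced to a one-line status. A hypothetical helper, assuming the response shape shown in this section (note that the counts arrive as JSON strings):

```python
def summarize_import_operation(op: dict) -> str:
    """Summarize an ImportDocuments long-running operation response."""
    metadata = op.get("metadata", {})
    # successCount/failureCount are serialized as strings, so coerce to int.
    success = int(metadata.get("successCount", 0))
    failure = int(metadata.get("failureCount", 0))
    if not op.get("done"):
        return f"running: {success} imported, {failure} failed so far"
    total = metadata.get("totalCount")
    summary = f"done: {success} imported, {failure} failed"
    if total is not None:
        summary += f" of {int(total)} total"
    return summary
```

This makes it easy to log progress while looping on the operation until done is true.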

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-12-15 UTC.