Create instances


This page describes how to create a Cloud SQL for PostgreSQL instance.

For detailed information about all instance settings, see Instance settings.

A newly-created instance has a postgres database.

The maximum number of instances you can have in a single project depends on the network architecture of those instances:

  • New SQL network architecture: You can have up to 1000 instances per project.
  • Old SQL network architecture: You can have up to 100 instances per project.
  • Using both architectures: Your limit will be somewhere between 100 and 1000, depending on the distribution of your instances across the two architectures.

File a support case to request an increase. Read replicas are counted as instances.
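
To check how many instances a project already has (read replicas included), you can count them with the gcloud CLI. This is a minimal sketch that assumes the gcloud CLI is installed and authenticated; PROJECT_ID is a placeholder:

gcloud sql instances list --project=PROJECT_ID --format="value(name)" | wc -l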

Note: This page contains features related to Cloud SQL editions. For more information about Cloud SQL editions, see Introduction to Cloud SQL editions.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Note: If you don't plan to keep the resources that you create in this procedure, create a project instead of selecting an existing project. After you finish these steps, you can delete the project, removing all resources associated with the project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Install the gcloud CLI.

  5. If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity.

  6. To initialize the gcloud CLI, run the following command:

    gcloud init
  7. Make sure you have the Cloud SQL Admin and Compute Viewer roles on your user account.

    Go to the IAM page

    Learn more about roles and permissions.
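
    If you need to grant these roles yourself, the following is a minimal sketch using the gcloud CLI; PROJECT_ID and USER_EMAIL are placeholders, and you need sufficient IAM permissions on the project to run it:

    gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL \
    --role=roles/cloudsql.admin

    gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL \
    --role=roles/compute.viewer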

Create a PostgreSQL instance

Important: For your Cloud SQL Enterprise Plus edition instance, Cloud SQL can generate a write endpoint automatically. For more information about this endpoint, including requirements for generating one automatically, see Generate the write endpoint.

Tip: If you plan on using private networking, then you can deploy both the private networking setup of your choice and the Cloud SQL instance along with clients such as Compute Engine VMs by using Terraform. For more information, see Simplified Cloud Networking Configuration Solutions.

Console

  1. In the Google Cloud console, go to the Cloud SQL Instances page.

    Go to Cloud SQL Instances

  2. Click Create instance.
  3. On the Choose your database engine panel of the Create an instance page, click Choose PostgreSQL.
  4. In the Choose a Cloud SQL edition section of the Create a PostgreSQL instance page, select the Cloud SQL edition for your instance: Enterprise or Enterprise Plus.

    Note: If the database version for your instance is PostgreSQL 16 or later, then the default Cloud SQL for PostgreSQL edition is Cloud SQL Enterprise Plus edition. If the database version is earlier than PostgreSQL 16, then the default edition is Cloud SQL Enterprise edition.

    For more information about Cloud SQL editions, see Introduction to Cloud SQL editions.

  5. Select the edition preset for your instance. To see the available presets, click the Edition preset menu. Note: To learn about how edition presets differ from one another, click Compare edition presets.
  6. In the Instance info section, select the database version for your instance. To see the available versions, click the Database version menu.

    The database version can't be edited after the instance has been created.

    Note: Only PostgreSQL versions 12 and later are compatible with Cloud SQL Enterprise Plus edition.
  7. In the Instance ID field of the Instance info pane, enter an ID for your instance.
    Don't include sensitive or personally identifiable information in your instance name.

    You do not need to include the project ID in the instance name. This is done automatically where appropriate (for example, in the log files).

  8. Enter a password for the postgres user.
  9. To see the password in clear text, click the Show password icon.

    You can either enter the password manually or click Generate to have Cloud SQL create a password for you automatically.

  10. Optional: Configure a password policy for the instance as follows:
    1. Select the Enable password policies checkbox. Note: When you enable a password policy, statements that create users or change user passwords can cause additional latency due to password policy verification.
    2. Click the Set password policy button, set one or more of the following options, and click Save.
      • Minimum length: Specifies the minimum number of characters that the password must have.
      • Password complexity: Checks if the password is a combination of lowercase, uppercase, numeric, and non-alphanumeric characters.
      • Restrict password reuse: Specifies the number of previous passwords that you can't reuse.
      • Disallow username: Prevents the use of the username in the password.
      • Set password change interval: Specifies the minimum number of hours after which you can change the password.
    3. Note: When you deselect the Enable password policies checkbox, the password policy parameters are reset.
  11. In the Choose region and zonal availability section, select the region and zone for your instance. Region availability might be different based on your Cloud SQL for PostgreSQL edition. For more information, see About instance settings.

    Place your instance in the same region as the resources that access it. The region you select can't be modified in the future. In most cases, you don't need to specify a zone.

    Note: If there is a resource location constraint on your organization policy, you must select one of the regions that the organization policy allows. You see a message about Resource Location Restriction in the Choose region and zonal availability section if a constraint exists. Learn more.

    If you are configuring your instance for high availability, you can select both a primary and secondary zone.

    The following conditions apply when the secondary zone is used during instance creation:

    • The zones default to Any for the primary zone and Any (different from primary) for the secondary zone.
    • If both the primary and secondary zones are specified, they must be distinct zones.
  12. In the Customize your instance section, update settings for your instance. Begin by clicking SHOW CONFIGURATION OPTIONS to display the groups of settings. Then, expand the groups you want to review and customize settings. A Summary of all the options you select is shown on the right. Customizing these instance settings is optional. Defaults are assigned in every case where no customizations are made.

    The following table is a quick reference to instance settings. For more details about each setting, see the instance settings page.

    Setting | Notes
    Machine type
    Machine type: Select from Shared core or Dedicated core. For Shared core, each machine type is classified by the number of CPUs (cores) and amount of memory for your instance.
    Cores: The number of vCPUs for your instance. Learn more.
    Memory: The amount of memory for your instance, in GB. Learn more.
    Custom: For the Dedicated core machine type, instead of selecting a predefined configuration, select the Custom button to create an instance with a custom configuration. When you select this option, you need to select the number of cores and amount of memory for your instance. Learn more.
    Storage
    Storage type: Determines whether your instance uses SSD or HDD storage. Learn more.
    Storage capacity: The amount of storage provisioned for the instance. Learn more.
    Enable automatic storage increases: Determines whether Cloud SQL automatically provides more storage for your instance when free space runs low. Learn more.
    Encryption
    Google-managed encryption: The default option.
    Customer-managed encryption key (CMEK): Select to use your key with Google Cloud Key Management Service. Learn more.
    Connections
    Private IP: Adds a private IP address for your instance. To enable connecting to the instance, additional configuration is required.
    Optionally, you can specify an allocated IP range for your instances to use for connections.
    1. Expand Show allocated IP range option.
    2. Select an IP range from the drop-down menu.

    Your instance can have both a public and a private IP address.

    Note: Cloud SQL generates a write endpoint automatically for your Cloud SQL Enterprise Plus edition instance if you do the following:
    1. Enable the Cloud DNS API for your Google Cloud project (if this API isn't enabled).
    2. Add a private IP address to the instance.
    3. Specify an associated network for the instance.
    4. Optionally, specify an allocated IP range for the instance.
    Public IP: Adds a public IP address for your instance. You can then add authorized networks to connect to the instance.

    Your instance can have both a public and a private IP address.

    Learn more about using public IP.

    Authorized networks

    Add the name for the new network and the Network address. Learn more.

    Private path for Google Cloud services

    By selecting this check box, you allow other Google Cloud services, such as BigQuery, to access data in Cloud SQL and make queries against this data over a private connection.

    Note: This check box is enabled only if you select the Private IP check box, and you add or select an authorized network to create a private connection.

    Enable Managed Connection Pooling

    By selecting this checkbox, you enable Managed Connection Pooling for your instance. Managed Connection Pooling lets you scale your workloads by optimizing resource utilization and connection latency for Cloud SQL instances through pooling and multiplexing. For more information about Managed Connection Pooling, see Managed Connection Pooling overview.

    Security
    Server certificate authority mode

    Choose the type of certificate authority (CA) that signs the server certificate for this Cloud SQL instance. Learn more.

    By default, when you create an instance in the Google Cloud console, the instance uses the Google-managed internal certificate authority (GOOGLE_MANAGED_INTERNAL_CA), which is the per-instance CA option.

    Data protection
    Automate backups: The window of time when you would like backups to start. Learn more.
    Choose where to store your backups: Select Multi-region for most use cases. If you need to store backups in a specific region, for example, if there are regulatory reasons to do so, select Region and select your region from the Location drop-down menu.
    Choose how many automated backups to store: The number of automated backups you would like to retain (from 1 to 365). Learn more.
    Enable point-in-time recovery: Enables point-in-time recovery and write-ahead logging. Learn more.
    Enable deletion protection: Determines whether to protect an instance against accidental deletion. Learn more.
    Enable retained backups after instance deletion: Determines whether automated and on-demand backups are retained after an instance is deleted. Learn more.
    Choose how many days of logs to retain: Configure write-ahead log retention from 1 to 7 days. The default setting is 7 days. Learn more.
    Maintenance
    Preferred window: Determines a one-hour window when Cloud SQL can perform disruptive maintenance on your instance. If you do not set the window, then disruptive maintenance can be done at any time. Learn more.
    Order of updates: Your preferred timing for instance updates, relative to other instances in the same project. Learn more.
    Flags
    ADD FLAG: You can use database flags to control settings and parameters for your instance. Learn more.
    Labels
    ADD LABEL: Add a key and value for each label that you add. You use labels to help organize your instances.
    Data cache
    Enable data cache (optional): Enables data cache for Cloud SQL for PostgreSQL Enterprise Plus edition instances. For more information about data cache, see data cache.
  13. Click Create Instance.

    Note: It might take a few minutes to create your instance. However, you can view information about the instance while it's being created.

gcloud

For information about installing and getting started with the gcloud CLI, see Installing gcloud CLI. For information about starting Cloud Shell, see the Cloud Shell documentation.

  1. Use the gcloud sql instances create command to create the instance:

  2. For Cloud SQL Enterprise Plus edition instances:
    gcloud sql instances create INSTANCE_NAME \
    --database-version=DATABASE_VERSION \
    --region=REGION \
    --tier=TIER \
    --edition=ENTERPRISE_PLUS
    Note: If the database version for your instance is PostgreSQL 16 or later, then the default edition is Cloud SQL Enterprise Plus edition.
    For Cloud SQL Enterprise edition instances:
    gcloud sql instances create INSTANCE_NAME \
    --database-version=DATABASE_VERSION \
    --region=REGION \
    --cpu=NUMBER_CPUS \
    --memory=MEMORY_SIZE \
    --edition=ENTERPRISE
    Note: If you either don't specify a database version or you specify a version other than PostgreSQL 16 or later, then the default edition is Cloud SQL Enterprise edition.

    If you specify PostgreSQL 16 or later for the database version of your instance, but you create the instance in a region that doesn't have region support for Cloud SQL Enterprise Plus edition, then you must create a Cloud SQL Enterprise edition instance.

    Or, alternatively, you can use the --tier flag if you choose db-f1-micro or db-g1-small as the machine type:
    gcloud sql instances create INSTANCE_NAME \
    --tier=API_TIER_STRING \
    --region=REGION

    There are restrictions on the values for vCPUs and memory size:

    • vCPUs must be either 1 or an even number between 2 and 96.
    • Memory must be:
      • 0.9 to 6.5 GB per vCPU
      • A multiple of 256 MB
      • At least 3.75 GB (3840 MB)

    For example, the following command creates a Cloud SQL Enterprise edition instance with two vCPUs and 7,680 MB of memory:

    gcloud sql instances create myinstance \
    --database-version=POSTGRES_16 \
    --cpu=2 \
    --memory=7680MB \
    --region=us-central1

    The following command creates a Cloud SQL Enterprise Plus edition instance with four cores:

    gcloud sql instances create myinstance \
    --database-version=POSTGRES_16 \
    --tier=db-perf-optimized-N-4 \
    --edition=ENTERPRISE_PLUS \
    --region=us-central1
    See Custom instance configuration for more information about how to size --cpu and --memory.

    The default value for REGION is us-central1.

    Don't include sensitive or personally identifiable information in your instance name; it is externally visible.
    You do not need to include the project ID in the instance name. This is done automatically where appropriate (for example, in the log files).

    If you are creating an instance for high availability, you can specify both the primary and secondary zones using the --zone and --secondary-zone parameters; see the example after this list. The following conditions apply when the secondary zone is used during instance creation or edit:

    • The zones must be valid zones.
    • If the secondary zone is specified, the primary must also be specified.
    • If the primary and secondary zones are specified, they must be distinct zones.
    • If the primary and secondary zones are specified, they must belong to the same region.
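
    For example, the following is a hedged sketch that creates a highly available Cloud SQL Enterprise Plus edition instance with explicit primary and secondary zones (the tier and zone values are illustrative only):

    gcloud sql instances create myinstance \
    --database-version=POSTGRES_16 \
    --edition=ENTERPRISE_PLUS \
    --tier=db-perf-optimized-N-4 \
    --region=us-central1 \
    --availability-type=REGIONAL \
    --zone=us-central1-a \
    --secondary-zone=us-central1-b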

    You can add more parameters to determine other instance settings:

    Setting | Parameter | Notes
    Required parameters
    Database version (--database-version): The database version, which is based on your Cloud SQL edition.
    Region (--region): See valid values. Note: Some organizations use an organization policy to restrict resource locations. If this type of policy affects your project, you can only select regions the organization policy allows. In the Location drop-down menu in the Console, the locations that are not allowed are unavailable. Learn more.
    Set password policy
    Enable password policy (--enable-password-policy): Enables the password policy when used. By default, the password policy is disabled. When disabled using the --clear-password-policy parameter, the other password policy parameters are reset. Note: When you enable a password policy, statements that create users or change user passwords can cause additional latency due to password policy verification.
    Minimum length (--password-policy-min-length): Specifies the minimum number of characters that the password must have.
    Password complexity (--password-policy-complexity): Enables the password complexity check to ensure that the password contains one of each of these types of characters: lowercase, uppercase, numeric, and non-alphanumeric. Set the value to COMPLEXITY_DEFAULT.
    Restrict password reuse (--password-policy-reuse-interval): Specifies the number of previous passwords that you can't reuse.
    Disallow username (--password-policy-disallow-username-substring): Prevents the use of the username in the password. Use the --no-password-policy-disallow-username-substring parameter to disable the check.
    Set password change interval (--password-policy-password-change-interval): Specifies the minimum duration after which you can change the password, for example, 2m for 2 minutes.
    Connectivity
    Private IP: --network, --no-assign-ip (optional), --allocated-ip-range-name (optional), --enable-google-private-path (optional)

    --network: Specifies the name of the VPC network you want to use for this instance. Private services access must already be configured for the network. Available only for the beta command (gcloud beta sql instances create).

    --no-assign-ip: Instance will only have a private IP address.

    --allocated-ip-range-name: If specified, sets a range name for which an IP range is allocated. For example, google-managed-services-default. The range name should comply with RFC-1035 and be within 1-63 characters. (gcloud alpha sql instances create).

    --enable-google-private-path: If you use this parameter, then you allow other Google Cloud services, such as BigQuery, to access data in Cloud SQL and make queries against this data over a private connection.

    This parameter is valid only if:

    • You use the --no-assign-ip parameter.
    • You use the --network parameter to specify the name of the VPC network that you want to use to create a private connection.

    Note: Cloud SQL generates a write endpoint automatically for your Cloud SQL Enterprise Plus edition instance if you do the following:
    1. Enable the Cloud DNS API for your Google Cloud project (if this API isn't enabled).
    2. Add a private IP address to the instance.
    3. Specify an associated network for the instance.
    4. Optionally, specify an allocated IP range for the instance.
    Public IP (--authorized-networks): For public IP connections, only connections from authorized networks can connect to your instance. Learn more.
    SSL Enforcement

    --ssl-mode

    --require-ssl

    The ssl-mode parameter sets the SSL/TLS enforcement mode for connections. For more information, see Settings for Cloud SQL for PostgreSQL.

    The require-ssl parameter determines whether SSL connections over IP are enforced or not. require-ssl is a legacy parameter. Use ssl-mode instead. For more information, see IpConfiguration.

    Server CA mode (--server-ca-mode)

    The --server-ca-mode flag configures the type of server certificate authority (CA) for an instance. You can select one of the following options:

    • GOOGLE_MANAGED_INTERNAL_CA: this is the default value. With this option, an internal CA dedicated to each Cloud SQL instance signs the server certificate for that instance.
    • GOOGLE_MANAGED_CAS_CA: with this option, a CA hierarchy consisting of a root CA and subordinate server CAs managed by Cloud SQL and hosted on Google Cloud Certificate Authority Service (CA Service) is used. The subordinate server CAs in a region sign the server certificates and are shared across instances in the region.
    • CUSTOMER_MANAGED_CAS_CA: with this option, you define the CA hierarchy and manage the rotation of the CA certificates. You create a CA pool in CA Service in the same region as your instance. One of the CAs in the pool is used to sign the server certificate. For more information, see Use a customer-managed CA.
    Machine type and storage
    Machine type (--tier): Used to specify a shared-core instance (db-f1-micro or db-g1-small). For a custom instance configuration, use the --cpu or --memory parameters instead. See Custom instance configuration.
    Storage type (--storage-type): Determines whether your instance uses SSD or HDD storage. Learn more.
    Storage capacity (--storage-size): The amount of storage provisioned for the instance, in GB. Learn more.
    Automatic storage increase (--storage-auto-increase): Determines whether Cloud SQL automatically provides more storage for your instance when free space runs low. Learn more.
    Automatic storage increase limit (--storage-auto-increase-limit): Determines how large Cloud SQL can automatically grow storage. Available only for the beta command (gcloud beta sql instances create). Learn more.
    Data cache (optional) (--enable-data-cache): Enables or deactivates the data cache for instances. For more information, see data cache.
    Automatic backups and high availability
    High availability (--availability-type): For a highly available instance, set to REGIONAL. Learn more.
    Secondary zone (--secondary-zone): If you're creating an instance for high availability, you can specify both the primary and secondary zones using the --zone and --secondary-zone parameters. The following restrictions apply when the secondary zone is used during instance creation or edit:
    • The zones must be valid zones.
    • If the secondary zone is specified, the primary must also be specified.
    • If the primary and secondary zones are specified, they must be distinct zones.
    • If the primary and secondary zones are specified, they must belong to the same region.

    Automatic backups (--backup-start-time): The window of time when you would like backups to start. Learn more.
    Retention settings for automated backups (--retained-backups-count): The number of automated backups to retain. Learn more.
    Retention settings for binary logging (--retained-transaction-log-days): The number of days to retain write-ahead logs for point-in-time recovery. Learn more.
    Point-in-time recovery (--enable-point-in-time-recovery): Enables point-in-time recovery and write-ahead logging. Learn more.
    Add database flags
    Database flags (--database-flags): You can use database flags to control settings and parameters for your instance. Learn more about database flags.
    Maintenance schedule
    Maintenance window (--maintenance-window-day, --maintenance-window-hour): Determines a one-hour window when Cloud SQL can perform disruptive maintenance on your instance. If you don't set the window, then disruptive maintenance can be done at any time. Learn more.
    Maintenance timing (--maintenance-release-channel): Your preferred timing for instance updates, relative to other instances in the same project. Use preview for earlier updates, and production for later updates. Learn more.
    Integration with Vertex AI
    --enable-google-ml-integration: Enables Cloud SQL instances to connect to Vertex AI to pass requests for real-time predictions and insights to the AI.
    --database-flags cloudsql.enable_google_ml_integration=on: By turning this flag on, Cloud SQL can integrate with Vertex AI.
    Custom SAN
    Add a custom subject alternative name (SAN): --custom-subject-alternative-names=DNS_NAMES

    If you want to use a custom DNS name to connect to a Cloud SQL instance instead of using an IP address, then configure the custom subject alternative name (SAN) setting while creating the instance. The custom DNS name that you insert into the custom SAN setting is added to the SAN field of the server certificate of the instance. This lets you use the custom DNS name with hostname validation securely.

    Before you can use the custom DNS name in your clients and applications, you must set up the mapping between the DNS name and the IP address. This is known as DNS resolution. You can add a comma-separated list of up to three custom DNS names to the custom SAN setting.

    Note: This feature is available for CUSTOMER_MANAGED_CAS_CA instances only. To create the instance, you must use the gcloud sql instances create command.
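
    As a hedged sketch only, the following shows how the custom SAN flag might be combined with a customer-managed CA at creation time; db.example.internal is a placeholder DNS name, and the CA Service pool configuration that a customer-managed CA requires is omitted here:

    gcloud sql instances create myinstance \
    --database-version=POSTGRES_16 \
    --edition=ENTERPRISE_PLUS \
    --tier=db-perf-optimized-N-4 \
    --region=us-central1 \
    --server-ca-mode=CUSTOMER_MANAGED_CAS_CA \
    --custom-subject-alternative-names=db.example.internal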
  3. Note the automatically assigned IP address.

    If you are not using the Cloud SQL Auth Proxy, you will use this address as the host address that your applications or tools use to connect to the instance.
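
    A hedged alternative is to read the address with the gcloud CLI; INSTANCE_NAME is a placeholder:

    gcloud sql instances describe INSTANCE_NAME \
    --format="value(ipAddresses[0].ipAddress)"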

  4. Set the password for the postgres user:
    gcloud sql users set-password postgres \
    --instance=INSTANCE_NAME \
    --password=PASSWORD
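
    As an end-to-end sketch that combines several of the optional parameters described above (all values are illustrative, not prescriptive):

    gcloud sql instances create myinstance \
    --database-version=POSTGRES_16 \
    --edition=ENTERPRISE_PLUS \
    --tier=db-perf-optimized-N-4 \
    --region=us-central1 \
    --backup-start-time=23:00 \
    --enable-point-in-time-recovery \
    --retained-backups-count=14 \
    --maintenance-window-day=SUN \
    --maintenance-window-hour=2 \
    --storage-auto-increase

    gcloud sql users set-password postgres \
    --instance=myinstance \
    --password=PASSWORD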

Terraform

To create an instance, use a Terraform resource.

resource "google_sql_database_instance" "postgres_pvp_instance_name" {  name             = "postgres-pvp-instance-name"  region           = "asia-northeast1"  database_version = "POSTGRES_14"  root_password    = "abcABC123!"  settings {    tier = "db-custom-2-7680"    password_validation_policy {      min_length                  = 6      reuse_interval              = 2      complexity                  = "COMPLEXITY_DEFAULT"      disallow_username_substring = true      password_change_interval    = "30s"      enable_password_policy      = true    }  }  # set `deletion_protection` to true, will ensure that one cannot accidentally delete this instance by  # use of Terraform whereas `deletion_protection_enabled` flag protects this instance at the GCP level.  deletion_protection = false}

Apply the changes

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.

Prepare Cloud Shell

  1. Launch Cloud Shell.
  2. Set the default Google Cloud project where you want to apply your Terraform configurations.

    You only need to run this command once per project, and you can run it in any directory.

    export GOOGLE_CLOUD_PROJECT=PROJECT_ID

    Environment variables are overridden if you set explicit values in the Terraform configuration file.

Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module).

  1. In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension, for example, main.tf. In this tutorial, the file is referred to as main.tf.
    mkdir DIRECTORY && cd DIRECTORY && touch main.tf
  2. If you are following a tutorial, you can copy the sample code in each section or step.

    Copy the sample code into the newly created main.tf.

    Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.

  3. Review and modify the sample parameters to apply to your environment.
  4. Save your changes.
  5. Initialize Terraform. You only need to do this once per directory.
    terraform init

    Optionally, to use the latest Google provider version, include the -upgrade option:

    terraform init -upgrade

Apply the changes

  1. Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
    terraform plan

    Make corrections to the configuration as necessary.

  2. Apply the Terraform configuration by running the following command and entering yes at the prompt:
    terraform apply

    Wait until Terraform displays the "Apply complete!" message.

  3. Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.
Note: Terraform samples typically assume that the required APIs are enabled in your Google Cloud project.

Delete the changes

To delete your changes, do the following:

  1. To disable deletion protection, in your Terraform configuration file set the deletion_protection argument to false.
    deletion_protection = false
  2. Apply the updated Terraform configuration by running the following command and entering yes at the prompt:
    terraform apply
  3. Remove resources previously applied with your Terraform configuration by running the following command and entering yes at the prompt:

    terraform destroy

REST v1

Create the instance

This example creates an instance. Some optional parameters, such as backups and binary logging, are also included. For a complete list of parameters for this call, see the Instances:insert page. For information about instance settings, including valid values for region, see Instance settings.

Don't include sensitive or personally identifiable information in your instance ID; it is externally visible.
You do not need to include the project ID in the instance name. This is done automatically where appropriate (for example, in the log files).

Before using any of the request data, make the following replacements:

Note: For the ipv4Enabled parameter, set the value to true if you're using a public IP address for your instance or false if your instance has a private IP address.

If you set the enablePrivatePathForGoogleCloudServices parameter to true, then you allow other Google Cloud services, such as BigQuery, to access data in Cloud SQL and make queries against this data over a private connection. By setting this parameter to false, other Google Cloud services can't access data in Cloud SQL over a private connection.

To set a password policy while creating an instance, include the passwordValidationPolicy object in the request. Set the following parameters, as required: enablePasswordPolicy, minLength, complexity, reuseInterval, disallowUsernameSubstring, and passwordChangeInterval. These fields appear in the request body that follows.

To create the instance so that it can integrate with Vertex AI, include the enableGoogleMlIntegration object in the request. This integration lets you apply large language models (LLMs), which are hosted in Vertex AI, to a Cloud SQL for PostgreSQL database.

Set the following parameters, as required:

  • enableGoogleMlIntegration: when this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI.
  • cloudsql.enable_google_ml_integration: when this parameter is set to on, Cloud SQL can integrate with Vertex AI.

HTTP method and URL:

POST https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances

Request JSON body:

{
  "name": "INSTANCE_ID",
  "region": "REGION",
  "databaseVersion": "DATABASE_VERSION",
  "rootPassword": "PASSWORD",
  "settings": {
    "tier": "MACHINE_TYPE",
    "edition": "EDITION_TYPE",
    "enableGoogleMlIntegration": "true" | "false",
    "databaseFlags": [
      {
        "name": "cloudsql.enable_google_ml_integration",
        "value": "on" | "off"
      }
    ],
    "dataCacheConfig": {
      "dataCacheEnabled": DATA_CACHE_ENABLED
    },
    "backupConfiguration": {
      "enabled": true
    },
    "passwordValidationPolicy": {
      "enablePasswordPolicy": true,
      "minLength": "MIN_LENGTH",
      "complexity": COMPLEXITY_DEFAULT,
      "reuseInterval": "REUSE_INTERVAL",
      "disallowUsernameSubstring": "DISALLOW_USERNAME_SUBSTRING",
      "passwordChangeInterval": "PASSWORD_CHANGE_INTERVAL"
    },
    "ipConfiguration": {
      "privateNetwork": "PRIVATE_NETWORK",
      "authorizedNetworks": [AUTHORIZED_NETWORKS],
      "ipv4Enabled": false,
      "enablePrivatePathForGoogleCloudServices": true,
      "serverCaMode": "CA_MODE",
      "customSubjectAlternativeNames": "DNS_NAMES"
    }
  }
}

To send your request, expand one of these options:

curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "kind": "sql#operation",
  "targetLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/instances/INSTANCE_ID",
  "status": "PENDING",
  "user": "user@example.com",
  "insertTime": "2019-09-25T22:19:33.735Z",
  "operationType": "CREATE",
  "name": "OPERATION_ID",
  "targetId": "INSTANCE_ID",
  "selfLink": "https://sqladmin.googleapis.com/v1/projects/PROJECT_ID/operations/OPERATION_ID",
  "targetProject": "PROJECT_ID"
}

The response is a long-running operation, which might take a few minutes to complete.

Retrieve the IPv4 address

Retrieve the automatically assigned IPv4 address for the new instance:

Before using any of the request data, make the following replacements:

  • project-id: your project ID
  • instance-id: instance ID created in prior step

HTTP method and URL:

GET https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id

To send your request, expand one of these options:

curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "kind": "sql#instance",
  "state": "RUNNABLE",
  "databaseVersion": "MYSQL_8_0_18",
  "settings": {
    "authorizedGaeApplications": [],
    "tier": "db-f1-micro",
    "kind": "sql#settings",
    "pricingPlan": "PER_USE",
    "replicationType": "SYNCHRONOUS",
    "activationPolicy": "ALWAYS",
    "ipConfiguration": {
      "authorizedNetworks": [],
      "ipv4Enabled": true
    },
    "locationPreference": {
      "zone": "us-west1-a",
      "kind": "sql#locationPreference"
    },
    "dataDiskType": "PD_SSD",
    "backupConfiguration": {
      "startTime": "18:00",
      "kind": "sql#backupConfiguration",
      "enabled": true,
      "binaryLogEnabled": true
    },
    "settingsVersion": "1",
    "storageAutoResizeLimit": "0",
    "storageAutoResize": true,
    "dataDiskSizeGb": "10"
  },
  "etag": "--redacted--",
  "ipAddresses": [
    {
      "type": "PRIMARY",
      "ipAddress": "10.0.0.1"
    }
  ],
  "serverCaCert": {
    ...
  },
  "instanceType": "CLOUD_SQL_INSTANCE",
  "project": "project-id",
  "serviceAccountEmailAddress": "redacted@gcp-sa-cloud-sql.iam.gserviceaccount.com",
  "backendType": "SECOND_GEN",
  "selfLink": "https://sqladmin.googleapis.com/v1/projects/project-id/instances/instance-id",
  "connectionName": "project-id:region:instance-id",
  "name": "instance-id",
  "region": "us-west1",
  "gceZone": "us-west1-a"
}

Look for the ipAddress field in the response.

REST v1beta4

Create the instance

This example creates an instance. Some optional parameters, such as backups and binary logging, are also included. For a complete list of parameters for this call, see the instances:insert page. For information about instance settings, including valid values for region, see Instance settings.

Don't include sensitive or personally identifiable information in your instance ID; it is externally visible.
You do not need to include the project ID in the instance name. This is done automatically where appropriate (for example, in the log files).

Before using any of the request data, make the following replacements:

Note: For the ipv4Enabled parameter, set the value to true if you're using a public IP address for your instance or false if your instance has a private IP address.

If you set the enablePrivatePathForGoogleCloudServices parameter to true, then you allow other Google Cloud services, such as BigQuery, to access data in Cloud SQL and make queries against this data over a private connection. By setting this parameter to false, other Google Cloud services can't access data in Cloud SQL over a private connection.

Note: Cloud SQL generates a write endpoint automatically for your Cloud SQL Enterprise Plus edition instance if you do the following:
  1. Enable the Cloud DNS API for your Google Cloud project (if this API isn't enabled).
  2. Add a private IP address to the instance.
  3. Specify an associated network for the instance.
  4. Optionally, specify an allocated IP range for the instance.

To set a password policy while creating an instance, include the passwordValidationPolicy object in the request. Set the following parameters, as required: enablePasswordPolicy, minLength, complexity, reuseInterval, disallowUsernameSubstring, and passwordChangeInterval. These fields appear in the request body that follows.

To create the instance so that it can integrate with Vertex AI, include the enableGoogleMlIntegration object in the request. This integration lets you apply large language models (LLMs), which are hosted in Vertex AI, to a Cloud SQL for PostgreSQL database.

Set the following parameters, as required:

  • enableGoogleMlIntegration: when this parameter is set to true, Cloud SQL instances can connect to Vertex AI to pass requests for real-time predictions and insights to the AI.
  • cloudsql.enable_google_ml_integration: when this parameter is set to on, Cloud SQL can integrate with Vertex AI.

HTTP method and URL:

POST https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances

Request JSON body:

{
  "name": "INSTANCE_ID",
  "region": "REGION",
  "databaseVersion": "DATABASE_VERSION",
  "rootPassword": "PASSWORD",
  "settings": {
    "tier": "MACHINE_TYPE",
    "edition": "EDITION_TYPE",
    "enableGoogleMlIntegration": "true" | "false",
    "databaseFlags": [
      {
        "name": "cloudsql.enable_google_ml_integration",
        "value": "on" | "off"
      }
    ],
    "dataCacheConfig": {
      "dataCacheEnabled": DATA_CACHE_ENABLED
    },
    "backupConfiguration": {
      "enabled": true
    },
    "passwordValidationPolicy": {
      "enablePasswordPolicy": true,
      "minLength": "MIN_LENGTH",
      "complexity": COMPLEXITY_DEFAULT,
      "reuseInterval": "REUSE_INTERVAL",
      "disallowUsernameSubstring": "DISALLOW_USERNAME_SUBSTRING",
      "passwordChangeInterval": "PASSWORD_CHANGE_INTERVAL"
    },
    "ipConfiguration": {
      "privateNetwork": "PRIVATE_NETWORK",
      "authorizedNetworks": [AUTHORIZED_NETWORKS],
      "ipv4Enabled": false,
      "enablePrivatePathForGoogleCloudServices": true,
      "serverCaMode": "CA_MODE",
      "customSubjectAlternativeNames": "DNS_NAMES"
    }
  }
}

To send your request, expand one of these options:

curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "kind": "sql#operation",
  "targetLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/instances/INSTANCE_ID",
  "status": "PENDING",
  "user": "user@example.com",
  "insertTime": "2020-01-01T19:13:21.834Z",
  "operationType": "CREATE",
  "name": "OPERATION_ID",
  "targetId": "INSTANCE_ID",
  "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/PROJECT_ID/operations/OPERATION_ID",
  "targetProject": "PROJECT_ID"
}

The response is a long-running operation, which might take a few minutes to complete.

Retrieve the IPv4 address

Retrieve the automatically assigned IPv4 address for the new instance:

Before using any of the request data, make the following replacements:

  • project-id: your project ID
  • instance-id: instance ID created in prior step

HTTP method and URL:

GET https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id

To send your request, expand one of these options:

curl (Linux, macOS, or Cloud Shell)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login, or by using Cloud Shell, which automatically logs you into the gcloud CLI. You can check the currently active account by running gcloud auth list.

Execute the following command:

curl -X GET \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
"https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id"

PowerShell (Windows)

Note: The following command assumes that you have logged in to the gcloud CLI with your user account by running gcloud init or gcloud auth login. You can check the currently active account by running gcloud auth list.

Execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method GET `
-Headers $headers `
-Uri "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id" | Select-Object -Expand Content

You should receive a JSON response similar to the following:

{
  "kind": "sql#instance",
  "state": "RUNNABLE",
  "databaseVersion": "MYSQL_8_0_18",
  "settings": {
    "authorizedGaeApplications": [],
    "tier": "db-f1-micro",
    "kind": "sql#settings",
    "pricingPlan": "PER_USE",
    "replicationType": "SYNCHRONOUS",
    "activationPolicy": "ALWAYS",
    "ipConfiguration": {
      "authorizedNetworks": [],
      "ipv4Enabled": true
    },
    "locationPreference": {
      "zone": "us-west1-a",
      "kind": "sql#locationPreference"
    },
    "dataDiskType": "PD_SSD",
    "backupConfiguration": {
      "startTime": "18:00",
      "kind": "sql#backupConfiguration",
      "enabled": true,
      "binaryLogEnabled": true
    },
    "settingsVersion": "1",
    "storageAutoResizeLimit": "0",
    "storageAutoResize": true,
    "dataDiskSizeGb": "10"
  },
  "etag": "--redacted--",
  "ipAddresses": [
    {
      "type": "PRIMARY",
      "ipAddress": "10.0.0.1"
    }
  ],
  "serverCaCert": {
    ...
  },
  "instanceType": "CLOUD_SQL_INSTANCE",
  "project": "project-id",
  "serviceAccountEmailAddress": "redacted@gcp-sa-cloud-sql.iam.gserviceaccount.com",
  "backendType": "SECOND_GEN",
  "selfLink": "https://sqladmin.googleapis.com/sql/v1beta4/projects/project-id/instances/instance-id",
  "connectionName": "project-id:region:instance-id",
  "name": "instance-id",
  "region": "us-west1",
  "gceZone": "us-west1-a"
}

Look for the ipAddress field in the response.

To see how the underlying REST API request is constructed for this task, see the APIs Explorer on the instances:insert page.

Generate the write endpoint

A write endpoint is a global domain name service (DNS) name that resolves to the IP address of the current primary instance automatically. This endpoint redirects incoming connections to the new primary instance automatically in case of a replica failover or switchover operation. You can use the write endpoint in a SQL connection string instead of an IP address. By using a write endpoint, you can avoid having to make application connection changes when a region outage occurs.

For more information about using a write endpoint to connect to an instance, see Connect to an instance using a write endpoint.
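
For example, the following is a hedged sketch of a psql invocation that uses the write endpoint in place of an IP address; WRITE_ENDPOINT is a placeholder for the DNS name that Cloud SQL generated for your instance:

psql "host=WRITE_ENDPOINT user=postgres dbname=postgres"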

Custom instance configurations

The machine type determines the memory and virtual cores available for your Cloud SQL instance. Machine types are part of a machine series, and machine series availability is determined by your Cloud SQL edition.

For Cloud SQL Enterprise Plus edition instances, Cloud SQL offers predefined machine types for your instances in the N2 and C4A machine series.

For Cloud SQL Enterprise edition instances, Cloud SQL offers predefined and custom machine types.

If you require real-time processing, such as online transaction processing (OLTP), make sure that your instance has enough memory to contain the entire working set. However, there are other factors that can impact memory requirements, such as the number of active connections and internal overhead processes. Perform load testing to avoid performance issues in your production environment.

When you configure your instance, select sufficient memory and vCPUs to handle your needs, and scale up your instance as your requirements increase. A machine configuration with insufficient vCPUs might lose its SLA coverage. For more information, see Operational guidelines.

To learn more about the machine types and machine series available for your Cloud SQL instance, see Machine series overview.

Tip: If you plan on using private networking, then you can deploy both the private networking setup of your choice and the Cloud SQL instance by using Terraform.

For more information, see Cloud SQL Simplified Networking.

Troubleshoot

Issue | Troubleshooting
Error message: Failed to create subnetwork. Couldn't find free blocks in allocated IP ranges. Please allocate new ranges for this service provider.

There are no more available addresses in the allocated IP range. There can be several possible scenarios:

  • The size of the allocated IP range for the private service connection is smaller than /24.
  • The size of the allocated IP range for the private service connection is too small for the number of Cloud SQL instances.
  • The required size of the allocated IP range is larger if instances are created in multiple regions. See allocated range size.

To resolve this issue, you can either expand the existing allocated IP range or allocate an additional IP range to the private service connection. For more information, see Allocate an IP address range.

If you used the --allocated-ip-range-name flag while creating the Cloud SQL instance, you can only expand the specified IP range.

If you're allocating a new range, take care that the allocation doesn't overlap with any existing allocations.
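
As a hedged sketch, a new range can be allocated for the private services connection with the following command; the range name, prefix length, network, and project are placeholders you would adjust for your environment:

gcloud compute addresses create NEW_RESERVED_RANGE_NAME \
--global \
--purpose=VPC_PEERING \
--prefix-length=23 \
--network=VPC_NETWORK \
--project=PROJECT_ID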

After creating a new IP range, update the VPC peering with the following command:

gcloud services vpc-peerings update \
--service=servicenetworking.googleapis.com \
--ranges=OLD_RESERVED_RANGE_NAME,NEW_RESERVED_RANGE_NAME \
--network=VPC_NETWORK \
--project=PROJECT_ID \
--force

If you're expanding an existing allocation, take care to increase only the allocation range and not decrease it. For example, if the original allocation was 10.0.10.0/24, then make the new allocation at least 10.0.10.0/23.

In general, if starting from a /24 allocation, decrementing the /mask by 1 for each condition (additional instance type group, additional region) is a good rule of thumb. For example, if trying to create both instance type groups on the same allocation, going from /24 to /23 is enough.

After expanding an existing IP range, update the VPC peering with the following command:

gcloud services vpc-peerings update \
--service=servicenetworking.googleapis.com \
--ranges=RESERVED_RANGE_NAME \
--network=VPC_NETWORK \
--project=PROJECT_ID
Error message: Failed to create subnetwork. Router status is temporarily unavailable. Please try again later. Help Token: [token-ID].

Try to create the Cloud SQL instance again.
Error message: HTTPError 400: Invalid request: Incorrect Service Networking config for instance: PROJECT_ID:INSTANCE_NAME:SERVICE_NETWORKING_NOT_ENABLED.

Enable the Service Networking API using the following command and try to create the Cloud SQL instance again.

gcloud services enable servicenetworking.googleapis.com \
--project=PROJECT_ID
Error message: Failed to create subnetwork. Required 'compute.projects.get' permission for PROJECT_ID.

When you create an instance with a private IP address, a service account is created just-in-time using the Service Networking API. If you have only recently enabled the Service Networking API, then the service account might not get created and the instance creation fails. In this case, you must wait for the service account to propagate throughout the system or manually add it with the required permissions.
Error message: More than 3 subject alternative names are not allowed.

You're trying to use a custom SAN to add more than three DNS names to the server certificate of a Cloud SQL instance. You can't add more than three DNS names to the instance.
Error message: Subject alternative names %s is too long. The maximum length is 253 characters.

Make sure that any DNS names that you want to add to the server certificate of a Cloud SQL instance don't have more than 253 characters.
Error message: Subject alternative name %s is invalid.

Verify that the DNS names that you want to add to the server certificate of a Cloud SQL instance meet the following criteria:

  • They don't have wildcard characters.
  • They don't have trailing dots.
  • They meet RFC 1034 specifications.

What's next

  1. Create a PostgreSQL database on the instance.
  2. Create PostgreSQL users on the instance.
  3. Secure and control access to the instance.
  4. Connect to the instance with a PostgreSQL client.
  5. Import data into the database.
  6. Learn about instance settings.

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-07-18 UTC.