gcloud transfer agents install
- NAME
- gcloud transfer agents install - install Transfer Service agents
- SYNOPSIS
gcloud transfer agents install --pool=POOL [--count=COUNT] [--creds-file=CREDS_FILE] [--docker-network=NETWORK] [--[no-]enable-multipart] [--id-prefix=ID_PREFIX] [--logs-directory=LOGS_DIRECTORY; default="/tmp"] [--memlock-limit=MEMLOCK_LIMIT; default=64000000] [--mount-directories=[MOUNT-DIRECTORIES,…]] [--proxy=PROXY] [--s3-compatible-mode] [--hdfs-namenode-uri=HDFS_NAMENODE_URI --hdfs-username=HDFS_USERNAME --hdfs-data-transfer-protection=HDFS_DATA_TRANSFER_PROTECTION] [--kerberos-config-file=KERBEROS_CONFIG_FILE --kerberos-keytab-file=KERBEROS_KEYTAB_FILE --kerberos-user-principal=KERBEROS_USER_PRINCIPAL --kerberos-service-principal=KERBEROS_SERVICE_PRINCIPAL] [GCLOUD_WIDE_FLAG …]
- DESCRIPTION
- Install Transfer Service agents to enable you to transfer data to or from POSIX filesystems, such as on-premises filesystems. Agents are installed locally on your machine and run inside Docker containers.
- EXAMPLES
- To create an agent pool for your agent, see the gcloud transfer agent-pools create command.
To install an agent that authenticates with your user account credentials and has default agent parameters, run:
gcloud transfer agents install --pool=AGENT_POOL
You will be prompted to run a command to generate a credentials file if one does not already exist.
To install an agent that authenticates with a service account with credentials stored at '/example/path.json', run:
gcloud transfer agents install --creds-file=/example/path.json --pool=AGENT_POOL
To install an agent using service account impersonation, run:
gcloud transfer agents install --creds-file=/example/path.json --pool=CUSTOM_AGENT_POOL --impersonate-service-account=impersonated-account@project-id.iam.gserviceaccount.com
Note: The --impersonate-service-account flag only applies to the API calls made by gcloud during the agent installation and authorization process. The impersonated credentials are not passed to the transfer agent's runtime environment. The agent itself does not support impersonation and will use the credentials provided via the --creds-file flag or the default gcloud authenticated account for all of its operations. To grant the agent permissions, you must provide a service account key with the required direct roles (e.g., Storage Transfer Agent, Storage Object User).
- REQUIRED FLAGS
--pool=POOL - The agent pool to associate with the newly installed agent. When creating transfer jobs, the agent pool parameter determines which agents are activated.
- FLAGS
--count=COUNT - Specify the number of agents to install on your current machine. System requirements: 8 GB of memory and 4 CPUs per agent.
Note: If the 'id-prefix' flag is specified, Transfer Service increments a number value after each prefix. Example: prefix1, prefix2, etc.
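For example, a command along these lines (the pool name and prefix are placeholders) installs three agents on the current machine with IDs derived from the prefix 'prefix':
gcloud transfer agents install --pool=AGENT_POOL --count=3 --id-prefix=prefix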
--creds-file=CREDS_FILE - Specify the path to the service account's credentials file.
No input is required if authenticating with your user account credentials, which Transfer Service will look for in your system.
Note that the credentials location will be mounted to the agent container.
--docker-network=NETWORK - Specify the network to connect the container to. This flag maps directly to the --network flag in the underlying 'docker run' command.
If binding directly to the host's network is an option, then setting this value to 'host' can dramatically improve transfer performance.
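For instance, a command of this shape (the pool name is a placeholder) runs the agent container on the host's network:
gcloud transfer agents install --pool=AGENT_POOL --docker-network=host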
--[no-]enable-multipart - Split up files and transfer the resulting chunks in parallel before merging them at the destination. Can be used to make transfers of large files faster as long as the network and disk speed are not limiting factors. If unset, the agent decides when to use the feature. Use --enable-multipart to enable and --no-enable-multipart to disable.
--id-prefix=ID_PREFIX - An optional prefix to add to the agent ID to help identify the agent.
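As an illustration (the pool name and prefix are placeholders), the following forces multipart transfers on and labels the agent ID with a custom prefix:
gcloud transfer agents install --pool=AGENT_POOL --enable-multipart --id-prefix=my-agent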
--logs-directory=LOGS_DIRECTORY; default="/tmp" - Specify the absolute path to the directory you want to store transfer logs in. If not specified, gcloud transfer will mount your /tmp directory for logs.
--memlock-limit=MEMLOCK_LIMIT; default=64000000 - Set the agent container's memlock limit. A value of 64000000 (default) or higher is required to ensure that agent versions 1.14 or later have enough locked memory to be able to start.
--mount-directories=[MOUNT-DIRECTORIES,…] - If you want to grant agents access to specific parts of your filesystem instead of the entire filesystem, specify which directory paths to mount to the agent container. Multiple paths must be separated by commas with no spaces (e.g., --mount-directories=/system/path/to/dir1,/path/to/dir2). When mounting specific directories, gcloud transfer will also mount a directory for logs (either /tmp or what you've specified for --logs-directory) and your Google credentials file for agent authentication.
It is strongly recommended that you use this flag. If this flag isn't specified, gcloud transfer will mount your entire filesystem to the agent container and give the agent root access.
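For example, a command of the following shape (all paths and the pool name are placeholders) limits the agent to two data directories and a dedicated log directory:
gcloud transfer agents install --pool=AGENT_POOL --mount-directories=/data/exports,/data/archive --logs-directory=/var/log/transfer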
--proxy=PROXY - Specify the HTTP URL and port of a proxy server if you want to use a forward proxy. For example, to use the URL 'example.com' and port '8080', specify 'http://www.example.com:8080/'.
Ensure that you specify the HTTP URL and not an HTTPS URL to avoid double-wrapping requests in TLS encryption. Double-wrapped requests prevent the proxy server from sending valid outbound requests.
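Putting that together, a hypothetical install that routes agent traffic through the proxy above (pool name is a placeholder) would look like:
gcloud transfer agents install --pool=AGENT_POOL --proxy=http://www.example.com:8080/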
--s3-compatible-mode - Allow the agent to work with S3-compatible sources. This flag blocks the agent's ability to work with other source types (e.g., file systems).
When using this flag, you must provide source credentials either as environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY or as default credentials in your system's configuration files.
To provide credentials as environment variables, run:
AWS_ACCESS_KEY_ID="id" AWS_SECRET_ACCESS_KEY="secret" gcloud transfer agents install --s3-compatible-mode
- HDFS FLAGS
--hdfs-namenode-uri=HDFS_NAMENODE_URI - A URI representing an HDFS cluster, including a schema, namenode, and port. Examples: "rpc://my-namenode:8020", "http://my-namenode:9870".
Use "http" or "https" for WebHDFS. If no schema is provided, the CLI assumes native "rpc". If no port is provided, the default is 8020 for RPC, 9870 for HTTP, and 9871 for HTTPS. For example, the input "my-namenode" becomes "rpc://my-namenode:8020".
--hdfs-username=HDFS_USERNAME - Username for connecting to an HDFS cluster with simple auth.
--hdfs-data-transfer-protection=HDFS_DATA_TRANSFER_PROTECTION - Client-side quality of protection setting for Kerberized clusters. The client-side QOP value cannot be more restrictive than the server-side QOP value.
HDFS_DATA_TRANSFER_PROTECTION must be one of: authentication, integrity, privacy.
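To sketch how these fit together (the namenode host, username, and pool name are placeholders), an agent pointed at a simple-auth HDFS cluster could be installed with:
gcloud transfer agents install --pool=AGENT_POOL --hdfs-namenode-uri=rpc://my-namenode:8020 --hdfs-username=hdfs-user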
- KERBEROS FLAGS
--kerberos-config-file=KERBEROS_CONFIG_FILE - Path to Kerberos config file.
--kerberos-keytab-file=KERBEROS_KEYTAB_FILE - Path to a Keytab file containing the user principal specified with the --kerberos-user-principal flag.
--kerberos-user-principal=KERBEROS_USER_PRINCIPAL - Kerberos user principal to use when connecting to an HDFS cluster via Kerberos auth.
--kerberos-service-principal=KERBEROS_SERVICE_PRINCIPAL - Kerberos service principal to use, of the form "<primary>/<instance>". Realm is mapped from your Kerberos config. Any supplied realm is ignored. If not passed in, it will default to "hdfs/<namenode_fqdn>" (fqdn = fully qualified domain name).
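As an illustrative sketch (the file paths, principal, namenode host, and pool name are placeholders), connecting to a Kerberized HDFS cluster might look like:
gcloud transfer agents install --pool=AGENT_POOL --hdfs-namenode-uri=rpc://my-namenode:8020 --hdfs-data-transfer-protection=privacy --kerberos-config-file=/etc/krb5.conf --kerberos-keytab-file=/home/user/transfer.keytab --kerberos-user-principal=transfer-user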
- GCLOUD WIDE FLAGS
- These flags are available to all commands:
--access-token-file, --account, --billing-project, --configuration, --flags-file, --flatten, --format, --help, --impersonate-service-account, --log-http, --project, --quiet, --trace-token, --user-output-enabled, --verbosity.
Run $ gcloud help for details.
- NOTES
- This variant is also available:
gcloud alpha transfer agents install