Get started with Azure Arc-enabled Kubernetes by using Azure CLI or Azure PowerShell to connect an existing Kubernetes cluster to Azure Arc.
For a conceptual look at connecting clusters to Azure Arc, see Azure Arc-enabled Kubernetes agent overview. To try things out in a sample/practice experience, visit the Azure Arc Jumpstart.
Important
In addition to these prerequisites, be sure to meet all network requirements for Azure Arc-enabled Kubernetes.
An Azure account with an active subscription. Create an account for free.
A basic understanding of Kubernetes core concepts.
An identity (user or service principal) that can be used to log in to Azure CLI and connect your cluster to Azure Arc.
The latest version of Azure CLI.
The latest version of the connectedk8s Azure CLI extension, installed by running the following command:

```azurecli
az extension add --name connectedk8s
```

An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:
Self-managed Kubernetes cluster using Cluster API
Note
The cluster needs to have at least one node of operating system and architecture type linux/amd64 and/or linux/arm64. See Cluster requirements for more about ARM64 scenarios.
At least 850 MB free for the Arc agents that will be deployed on the cluster, and capacity to use approximately 7% of a single CPU.
A kubeconfig file and context pointing to your cluster. For more information, see Configure access to multiple clusters.
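As a quick sanity check, you can confirm that your current kubeconfig context points at the cluster you intend to connect. The helper and context name below are illustrative, not part of the official tooling:

```shell
# Hypothetical helper (not part of Azure tooling): fail fast if the active
# kubeconfig context is not the cluster you intend to connect.
ensure_context() {
  expected="$1"
  current=$(kubectl config current-context)
  if [ "$current" != "$expected" ]; then
    echo "Expected context '$expected' but found '$current'" >&2
    return 1
  fi
  echo "Using context: $current"
}

# Example usage (context name is an assumption):
# ensure_context my-arc-cluster
```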
Enter the following commands:
```azurecli
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation
```

Monitor the registration process. Registration may take up to 10 minutes.

```azurecli
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
az provider show -n Microsoft.ExtendedLocation -o table
```

Once registered, you should see the RegistrationState value for these namespaces change to Registered.
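Rather than re-running the show commands by hand, the wait can be sketched as a small polling helper. The helper name is illustrative; it relies only on the az provider show command used above:

```shell
# Hypothetical helper: poll a resource provider until its registration
# state reports "Registered".
wait_for_provider() {
  ns="$1"
  state=""
  until [ "$state" = "Registered" ]; do
    state=$(az provider show -n "$ns" --query registrationState -o tsv)
    echo "$ns: $state"
    [ "$state" = "Registered" ] || sleep 10
  done
}

# Example usage:
# for ns in Microsoft.Kubernetes Microsoft.KubernetesConfiguration Microsoft.ExtendedLocation; do
#   wait_for_provider "$ns"
# done
```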
Run the following command:
```azurecli
az group create --name AzureArcTest --location EastUS --output table
```

Output:

```output
Location    Name
----------  ------------
eastus      AzureArcTest
```

Run the following command to connect your cluster. This command deploys the Azure Arc agents to the cluster and installs Helm v3.6.3 to the .azure folder of the deployment machine. This Helm 3 installation is only used for Azure Arc, and it doesn't remove or change any previously installed versions of Helm on the machine.
In this example, the cluster's name is AzureArcTest1.
```azurecli
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
```

Output:

```output
Helm release deployment succeeded

{
  "aadProfile": {
    "clientAppId": "",
    "serverAppId": "",
    "tenantId": ""
  },
  "agentPublicKeyCertificate": "xxxxxxxxxxxxxxxxxxx",
  "agentVersion": null,
  "connectivityStatus": "Connecting",
  "distribution": "gke",
  "id": "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/AzureArcTest/providers/Microsoft.Kubernetes/connectedClusters/AzureArcTest1",
  "identity": {
    "principalId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "tenantId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "type": "SystemAssigned"
  },
  "infrastructure": "gcp",
  "kubernetesVersion": null,
  "lastConnectivityTime": null,
  "location": "eastus",
  "managedIdentityCertificateExpirationTime": null,
  "name": "AzureArcTest1",
  "offering": null,
  "provisioningState": "Succeeded",
  "resourceGroup": "AzureArcTest",
  "tags": {},
  "totalCoreCount": null,
  "totalNodeCount": null,
  "type": "Microsoft.Kubernetes/connectedClusters"
}
```

Tip
If you run the command above without specifying the location parameter, the Azure Arc-enabled Kubernetes resource is created in the same location as the resource group. To create the Azure Arc-enabled Kubernetes resource in a different location, specify either --location <region> or -l <region> when running the az connectedk8s connect command.
Important
If deployment fails due to a timeout error, see our troubleshooting guide for details on how to resolve this issue.
If your cluster is behind an outbound proxy server, requests must be routed via the outbound proxy server.
On the deployment machine, set the environment variables needed for Azure CLI to use the outbound proxy server:
```bash
export HTTP_PROXY=<proxy-server-ip-address>:<port>
export HTTPS_PROXY=<proxy-server-ip-address>:<port>
export NO_PROXY=<cluster-apiserver-ip-address>:<port>
```

On the Kubernetes cluster, run the connect command with the --proxy-https and --proxy-http parameters specified. If your proxy server is set up with both HTTP and HTTPS, be sure to use --proxy-http for the HTTP proxy and --proxy-https for the HTTPS proxy. If your proxy server only uses HTTP, you can use that value for both parameters.

```azurecli
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file>
```

Note
The --proxy-skip-range parameter can be used to specify CIDR ranges and endpoints in a comma-separated list so that communication from the agents to these endpoints doesn't go through the outbound proxy. At a minimum, specify the CIDR range of the services in the cluster as the value for this parameter. For example, suppose kubectl get svc -A returns a list of services where all the services have ClusterIP values in the range 10.0.0.0/16. Then the value to specify for --proxy-skip-range is 10.0.0.0/16,kubernetes.default.svc,.svc.cluster.local,.svc.

--proxy-http, --proxy-https, and --proxy-skip-range are expected for most outbound proxy environments. --proxy-cert is only required if you need to inject trusted certificates expected by the proxy into the trusted certificate store of agent pods.

For outbound proxy servers, if you're only providing a trusted certificate, you can run az connectedk8s connect with just the --proxy-cert parameter specified:
```azurecli
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-cert <path-to-cert-file>
```

If there are multiple trusted certificates, combine the certificate chain (leaf certificate, intermediate certificate, root certificate) into a single file and pass that file in the --proxy-cert parameter.
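As a convenience, the --proxy-skip-range guidance above can be sketched as a tiny helper that assembles the value from your cluster's service CIDR. The helper name and the 10.0.0.0/16 CIDR are illustrative; check kubectl get svc -A for your cluster's actual range:

```shell
# Assemble a --proxy-skip-range value: the service CIDR plus the in-cluster
# DNS suffixes that agent traffic should reach without the proxy.
build_skip_range() {
  svc_cidr="$1"
  echo "${svc_cidr},kubernetes.default.svc,.svc.cluster.local,.svc"
}

# Example (10.0.0.0/16 is an assumed service CIDR):
# az connectedk8s connect ... --proxy-skip-range "$(build_skip_range 10.0.0.0/16)"
```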
Note
--custom-ca-cert is an alias for --proxy-cert; the two parameters can be used interchangeably. If both are passed in the same command, the one passed last is honored.

Run the following command:
```azurecli
az connectedk8s list --resource-group AzureArcTest --output table
```

Output:

```output
Name           Location    ResourceGroup
-------------  ----------  ---------------
AzureArcTest1  eastus      AzureArcTest
```

For help troubleshooting connection problems, see Diagnose connection issues for Azure Arc-enabled Kubernetes clusters.
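To check a single cluster rather than listing the whole resource group, a small wrapper around az connectedk8s show can report the connectivity status. The helper name is illustrative; connectivityStatus is the same property shown in the connect output earlier:

```shell
# Print a connected cluster's connectivityStatus (e.g. Connecting, Connected).
arc_status() {
  az connectedk8s show --name "$1" --resource-group "$2" \
    --query connectivityStatus -o tsv
}

# Example usage:
# arc_status AzureArcTest1 AzureArcTest
```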
Note
After onboarding the cluster, it takes up to ten minutes for cluster metadata (such as cluster version and number of nodes) to appear on the overview page of the Azure Arc-enabled Kubernetes resource in the Azure portal.
Azure Arc-enabled Kubernetes deploys several agents into the azure-arc namespace.
View these deployments and pods using:
```console
kubectl get deployments,pods -n azure-arc
```

Verify that all pods are in a Running state.
Output:
```output
NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/cluster-metadata-operator   1/1     1            1           13d
deployment.apps/clusterconnect-agent        1/1     1            1           13d
deployment.apps/clusteridentityoperator     1/1     1            1           13d
deployment.apps/config-agent                1/1     1            1           13d
deployment.apps/controller-manager          1/1     1            1           13d
deployment.apps/extension-manager           1/1     1            1           13d
deployment.apps/flux-logs-agent             1/1     1            1           13d
deployment.apps/kube-aad-proxy              1/1     1            1           13d
deployment.apps/metrics-agent               1/1     1            1           13d
deployment.apps/resource-sync-agent         1/1     1            1           13d

NAME                                            READY   STATUS    RESTARTS   AGE
pod/cluster-metadata-operator-9568b899c-2stjn   2/2     Running   0          13d
pod/clusterconnect-agent-576758886d-vggmv       3/3     Running   0          13d
pod/clusteridentityoperator-6f59466c87-mm96j    2/2     Running   0          13d
pod/config-agent-7cbd6cb89f-9fdnt               2/2     Running   0          13d
pod/controller-manager-df6d56db5-kxmfj          2/2     Running   0          13d
pod/extension-manager-58c94c5b89-c6q72          2/2     Running   0          13d
pod/flux-logs-agent-6db9687fcb-rmxww            1/1     Running   0          13d
pod/kube-aad-proxy-67b87b9f55-bthqv             2/2     Running   0          13d
pod/metrics-agent-575c565fd9-k5j2t              2/2     Running   0          13d
pod/resource-sync-agent-6bbd8bcd86-x5bk5        2/2     Running   0          13d
```

For more information about these agents, see Azure Arc-enabled Kubernetes agent overview.
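Instead of eyeballing the pod list, you can block until the agent deployments report Available. This is a sketch using kubectl wait; the helper name and the five-minute timeout are arbitrary choices:

```shell
# Wait until every deployment in the azure-arc namespace is Available,
# failing if any is still unavailable after five minutes.
wait_for_arc_agents() {
  kubectl wait --namespace azure-arc --for=condition=Available \
    deployment --all --timeout=300s
}

# Example usage:
# wait_for_arc_agents
```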
You can delete the Azure Arc-enabled Kubernetes resource, any associated configuration resources, and any agents running on the cluster by using the following command:
```azurecli
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest
```

If the deletion process fails, use the following command to force deletion (adding -y if you want to bypass the confirmation prompt):

```azurecli
az connectedk8s delete -n AzureArcTest1 -g AzureArcTest --force
```

This command can also be used if you experience issues when creating a new cluster deployment (due to previously created resources not being completely removed).
Note
Deleting the Azure Arc-enabled Kubernetes resource using the Azure portal removes any associated configuration resources, but does not remove any agents running on the cluster. Because of this, we recommend deleting the Azure Arc-enabled Kubernetes resource using az connectedk8s delete rather than deleting the resource in the Azure portal.