Proxyless Security Mesh End-to-End Tests

grpc/psm-interop


Proxyless Security Mesh Interop Tests executed on Kubernetes.

Experimental

Work in progress. Internal APIs may and will change. Please refrain from making changes to this codebase at the moment.

Stabilization roadmap

  • Replace retrying with tenacity
  • Generate a namespace for each test to prevent resource name conflicts and allow running tests in parallel
  • Security: run server and client in separate namespaces
  • Make framework.infrastructure.gcp resources first-class citizens, support simpler CRUD
  • Security: manage roles/iam.workloadIdentityUser role grant lifecycle for dynamically-named namespaces
  • Restructure framework.test_app and framework.xds_k8s* into a module containing xDS-interop-specific logic
  • Address inline TODOs in code
  • Improve README.md documentation, explain helpers in bin/ folder

Installation

Requirements

  1. Python v3.10+
  2. Google Cloud SDK
  3. kubectl

kubectl can be installed via gcloud components install kubectl, or via a system package manager: https://kubernetes.io/docs/tasks/tools/#kubectl

Python3 venv tool may need to be installed from APT on some Ubuntu systems:

```shell
sudo apt-get install python3-venv
```

Getting Started

  1. If you haven't already, initialize the gcloud SDK
  2. Activate a gcloud configuration with your project
  3. Enable gcloud services:

```shell
gcloud services enable \
  artifactregistry.googleapis.com \
  compute.googleapis.com \
  container.googleapis.com \
  logging.googleapis.com \
  monitoring.googleapis.com \
  networksecurity.googleapis.com \
  networkservices.googleapis.com \
  secretmanager.googleapis.com \
  trafficdirector.googleapis.com
```

Configure GKE cluster

This is an example outlining the minimal requirements to run the baseline tests. Update the gcloud SDK:

```shell
gcloud -q components update
```

Pre-populate environment variables for convenience. To find the project id, refer to Identifying projects.

```shell
export PROJECT_ID="your-project-id"
export PROJECT_NUMBER=$(gcloud projects describe "${PROJECT_ID}" --format="value(projectNumber)")

# Compute Engine default service account
export GCE_SA="${PROJECT_NUMBER}-compute@developer.gserviceaccount.com"

# The prefix to name GCP resources used by the framework
export RESOURCE_PREFIX="xds-k8s-interop-tests"

# The name of your cluster, e.g. xds-k8s-test-cluster
export CLUSTER_NAME="${RESOURCE_PREFIX}-cluster"

# The zone of your cluster, e.g. us-central1-a
export ZONE="us-central1-a"

# Dedicated GCP Service Account to use with workload identity.
export WORKLOAD_SA_NAME="${RESOURCE_PREFIX}"
export WORKLOAD_SA_EMAIL="${WORKLOAD_SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
```
Create the cluster

Minimal requirements: a VPC-native cluster with Workload Identity enabled.

```shell
gcloud container clusters create "${CLUSTER_NAME}" \
  --scopes=cloud-platform \
  --zone="${ZONE}" \
  --enable-ip-alias \
  --workload-pool="${PROJECT_ID}.svc.id.goog" \
  --workload-metadata=GKE_METADATA \
  --tags=allow-health-checks
```

For security tests you also need to create CAs and configure the cluster to use those CAs as described here.

Create the firewall rule

Allow health checking mechanisms to query the workloads' health.
This step can be skipped if the driver is executed with --ensure_firewall.

```shell
gcloud compute firewall-rules create "${RESOURCE_PREFIX}-allow-health-checks" \
  --network=default --action=allow --direction=INGRESS \
  --source-ranges="35.191.0.0/16,130.211.0.0/22" \
  --target-tags=allow-health-checks \
  --rules=tcp:8080-8100
```
Setup GCP Service Account

Create a dedicated GCP service account to use with workload identity.

```shell
gcloud iam service-accounts create "${WORKLOAD_SA_NAME}" \
  --display-name="xDS K8S Interop Tests Workload Identity Service Account"
```

Enable the service account to access the Traffic Director API.

```shell
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${WORKLOAD_SA_EMAIL}" \
  --role="roles/trafficdirector.client" \
  --condition="None"
```
Allow access to images

The test framework needs read access to the client and server images and the bootstrap generator image. You may have these images in your own project, but if you want to use the ones from the grpc-testing project, you will have to grant the necessary access to these images. To grant access to the images stored in the grpc-testing project, run:

```shell
gcloud artifacts repositories add-iam-policy-binding "projects/grpc-testing/locations/us/repositories/psm-interop" \
  --member="serviceAccount:${GCE_SA}" \
  --role="roles/artifactregistry.reader" \
  --condition=None

gcloud artifacts repositories add-iam-policy-binding "projects/grpc-testing/locations/us/repositories/trafficdirector" \
  --member="serviceAccount:${GCE_SA}" \
  --role="roles/artifactregistry.reader" \
  --condition=None
```

If you get PERMISSION_DENIED, contact one of the repo maintainers.

Allow test driver to configure workload identity automatically

The test driver will automatically grant roles/iam.workloadIdentityUser to allow the Kubernetes service account to impersonate the dedicated GCP workload service account (corresponds to step 5 of Authenticating to Google Cloud). This action requires the test framework to have the iam.serviceAccounts.create permission on the project.

If you're running the test framework locally and you have roles/owner on your project, you can skip this step.
If you're configuring the test framework to run on CI: use a roles/owner account once to allow the test framework to grant roles/iam.workloadIdentityUser.

```shell
# Assuming CI is using Compute Engine default service account.
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
  --member="serviceAccount:${GCE_SA}" \
  --role="roles/iam.serviceAccountAdmin" \
  --condition-from-file=<(cat <<- END
---
title: allow_workload_identity_only
description: Restrict serviceAccountAdmin to granting role iam.workloadIdentityUser
expression: |-
  api.getAttribute('iam.googleapis.com/modifiedGrantsByRole', [])
        .hasOnly(['roles/iam.workloadIdentityUser'])
END
)
```
Configure GKE cluster access
```shell
# Unless you're using a GCP VM with preconfigured Application Default Credentials, acquire them for your user
gcloud auth application-default login

# Install authentication plugin for kubectl.
# Details: https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
gcloud components install gke-gcloud-auth-plugin

# Configure GKE cluster access for kubectl
gcloud container clusters get-credentials "${CLUSTER_NAME}" --zone "${ZONE}"

# Save the generated kube context name
export KUBE_CONTEXT="$(kubectl config current-context)"
```

Install python dependencies

```shell
# Create python virtual environment
python3 -m venv venv

# Activate virtual environment
. ./venv/bin/activate

# Install requirements
pip install -r requirements.lock

# Generate protos
python -m grpc_tools.protoc --proto_path=. \
  --python_out=. --grpc_python_out=. --pyi_out=. \
  protos/grpc/testing/*.proto protos/grpc/testing/xdsconfig/*.proto
```

Basic usage

Local development

This test driver allows running tests locally against remote GKE clusters, right from your dev environment. You need:

  1. Follow the installation instructions
  2. Authenticated gcloud
  3. A kubectl context (see Configure GKE cluster access)
  4. Run tests with the --debug_use_port_forwarding argument. The test driver will automatically start and stop port forwarding using kubectl subprocesses. (experimental)

Making changes to the driver

  1. Install additional dev packages: pip install -r requirements-dev.txt
  2. Use ./bin/black.sh and ./bin/isort.sh helpers to auto-format code.

Updating Python Dependencies

We track our Python-level dependencies using three different files:

  • requirements.txt
  • dev-requirements.txt
  • requirements.lock

requirements.txt lists modules without specific versions supplied, though version ranges may be specified. requirements.lock is generated from requirements.txt and does specify versions for every dependency in the transitive dependency tree.

When updating requirements.txt, you must also update requirements.lock. To do this, navigate to this directory and run ./bin/freeze.sh.
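For illustration, here is the relationship between a ranged entry and its pinned counterpart. The package name and versions below are hypothetical examples, not the actual contents of either file:

```
# requirements.txt: a version range is allowed
absl-py>=1.0,<2.0

# requirements.lock: the exact pin recorded by ./bin/freeze.sh
absl-py==1.4.0
```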

Setup test configuration

There are many arguments to be passed into the test run. You can save the arguments to a config file ("flagfile") for your development environment. Use config/local-dev.cfg.example as a starting point:

```shell
cp config/local-dev.cfg.example config/local-dev.cfg
```

If you exported environment variables in the above sections, you can template them into the local config (note this recreates the config):

```shell
envsubst < config/local-dev.cfg.example > config/local-dev.cfg
```
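As a rough sketch of what envsubst does here, it substitutes ${VAR} references in the template with values from the environment. A pure-Python illustration using the standard library (not part of the framework; the flag and value are hypothetical):

```python
import os
from string import Template

# Hypothetical value; in practice this comes from the exports above.
os.environ["PROJECT_ID"] = "your-project-id"

# envsubst reads the template text and replaces ${PROJECT_ID} and friends
# with the corresponding environment variable values.
template = "--project=${PROJECT_ID}\n"
rendered = Template(template).substitute(os.environ)
print(rendered, end="")  # --project=your-project-id
```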

Learn more about flagfiles in the abseil documentation.
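Mechanically, a flagfile is a plain-text file with one flag per line that absl splices into the command line. A simplified pure-Python illustration of that expansion (the flag values are made-up examples; this is not absl's actual implementation, and the real flag set lives in config/local-dev.cfg.example):

```python
# Expand a flagfile into argv entries: each non-empty, non-comment line
# becomes one command-line argument. (Simplified stand-in for absl's logic.)
def expand_flagfile(lines):
    return [ln.strip() for ln in lines
            if ln.strip() and not ln.strip().startswith("#")]

flagfile = [
    "# hypothetical flags; see config/local-dev.cfg.example for the real set",
    "--resource_suffix=dev",
    "--verbosity=1",
]
argv = ["tests/baseline_test.py"] + expand_flagfile(flagfile)
print(argv)  # ['tests/baseline_test.py', '--resource_suffix=dev', '--verbosity=1']
```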

Test suites

See the full list of available test suites in the tests/ folder.

xDS Baseline Tests

Test suite meant to confirm that basic xDS features work as expected. Executing it before other test suites will help to identify whether a test failure is related to the specific features under test, or caused by unrelated infrastructure disturbances.

```shell
# Help
python -m tests.baseline_test --help
python -m tests.baseline_test --helpfull

# Run the baseline test with local-dev.cfg settings
python -m tests.baseline_test --flagfile="config/local-dev.cfg"

# Same as above, but using the helper script
./run.sh tests/baseline_test.py
```

xDS Security Tests

Test suite meant to verify mTLS/TLS features. Note that this requires additional environment configuration. For more details, and for the setup for the security tests, see the "Setting up Traffic Director service security with proxyless gRPC" user guide.

```shell
# Run the security test with local-dev.cfg settings
python -m tests.security_test --flagfile="config/local-dev.cfg"

# Same as above, but using the helper script
./run.sh tests/security_test.py
```

Helper scripts

You can use the interop xds-k8s bin/ scripts to configure TD, start k8s instances step-by-step, and keep them alive for as long as you need.

  • To run helper scripts using local config:
    • python -m bin.script_name --flagfile=config/local-dev.cfg
    • ./run.sh bin/script_name.py automatically appends the flagfile
  • Use --help to see script-specific arguments
  • Use --helpfull to see all available arguments

Overview

```shell
# Helper tool to configure Traffic Director with different security options
python -m bin.run_td_setup --help

# Helper tools to run the test server, client (with or without security)
python -m bin.run_test_server --help
python -m bin.run_test_client --help

# Helper tool to verify different security configurations via channelz
python -m bin.run_channelz --help
```

./run.sh helper

Use ./run.sh to execute helper scripts and tests with config/local-dev.cfg.

```
USAGE: ./run.sh script_path [arguments]
   script_path: path to python script to execute, relative to driver root folder
   arguments ...: arguments passed to program in sys.argv

ENVIRONMENT:
   XDS_K8S_CONFIG: file path to the config flagfile, relative to
                   driver root folder. Default: config/local-dev.cfg
                   Will be appended as --flagfile="config_absolute_path" argument
   XDS_K8S_DRIVER_VENV_DIR: the path to python virtual environment directory
                            Default: $XDS_K8S_DRIVER_DIR/venv

DESCRIPTION:
This tool performs the following:
1) Ensures python virtual env installed and activated
2) Exports test driver root in PYTHONPATH
3) Automatically appends --flagfile="$XDS_K8S_CONFIG" argument

EXAMPLES:
./run.sh bin/run_td_setup.py --help
./run.sh bin/run_td_setup.py --helpfull
XDS_K8S_CONFIG=./path-to-flagfile.cfg ./run.sh bin/run_td_setup.py --resource_suffix=override-suffix
./run.sh tests/baseline_test.py
./run.sh tests/security_test.py --verbosity=1 --logger_levels=__main__:DEBUG,framework:DEBUG
./run.sh tests/security_test.py SecurityTest.test_mtls --nocheck_local_certs
```

Partial setups

Regular workflow

```shell
# Setup Traffic Director
./run.sh bin/run_td_setup.py

# Start test server
./run.sh bin/run_test_server.py

# Add test server to the backend service
./run.sh bin/run_td_setup.py --cmd=backends-add

# Start test client
./run.sh bin/run_test_client.py
```

Secure workflow

```shell
# Setup Traffic Director in mTLS mode. See --help for all options
./run.sh bin/run_td_setup.py --security=mtls

# Start test server in secure mode
./run.sh bin/run_test_server.py --mode=secure

# Add test server to the backend service
./run.sh bin/run_td_setup.py --cmd=backends-add

# Start test client in secure mode
./run.sh bin/run_test_client.py --mode=secure
```

Sending RPCs

Start port forwarding

```shell
# Client: all services always on port 8079
kubectl port-forward deployment.apps/psm-grpc-client 8079

# Server regular mode: all grpc services on port 8080
kubectl port-forward deployment.apps/psm-grpc-server 8080

# OR

# Server secure mode: TestServiceImpl is on 8080,
kubectl port-forward deployment.apps/psm-grpc-server 8080
# everything else (channelz, healthcheck, CSDS) on 8081
kubectl port-forward deployment.apps/psm-grpc-server 8081
```

Send RPCs with grpcurl

```shell
# 8081 if security enabled
export SERVER_ADMIN_PORT=8080

# List server services using reflection
grpcurl --plaintext 127.0.0.1:$SERVER_ADMIN_PORT list

# List client services using reflection
grpcurl --plaintext 127.0.0.1:8079 list

# List channels via channelz
grpcurl --plaintext 127.0.0.1:$SERVER_ADMIN_PORT grpc.channelz.v1.Channelz.GetTopChannels
grpcurl --plaintext 127.0.0.1:8079 grpc.channelz.v1.Channelz.GetTopChannels

# Send GetClientStats to the client
grpcurl --plaintext -d '{"num_rpcs": 10, "timeout_sec": 30}' 127.0.0.1:8079 \
  grpc.testing.LoadBalancerStatsService.GetClientStats
```

Cleanup

  • First, make sure to stop port forwarding, if any
  • Run ./bin/cleanup.sh

Partial cleanup

You can run commands below to stop/start, create/delete resources however you want.
Generally, it's better to remove resources in the opposite order of their creation.

Cleanup regular resources:

```shell
# Cleanup TD resources
./run.sh bin/run_td_setup.py --cmd=cleanup

# Stop test client
./run.sh bin/run_test_client.py --cmd=cleanup

# Stop test server, and remove the namespace
./run.sh bin/run_test_server.py --cmd=cleanup --cleanup_namespace
```

Cleanup regular and security-specific resources:

```shell
# Cleanup TD resources, with security
./run.sh bin/run_td_setup.py --cmd=cleanup --security=mtls

# Stop test client (secure)
./run.sh bin/run_test_client.py --cmd=cleanup --mode=secure

# Stop test server (secure), and remove the namespace
./run.sh bin/run_test_server.py --cmd=cleanup --cleanup_namespace --mode=secure
```

In addition, here are some other helpful partial cleanup commands:

```shell
# Remove all backends from the backend services
./run.sh bin/run_td_setup.py --cmd=backends-cleanup

# Stop the server, but keep the namespace
./run.sh bin/run_test_server.py --cmd=cleanup --nocleanup_namespace
```

Known errors

Error forwarding port

If you stopped a test with ctrl+c while using --debug_use_port_forwarding, you might see an error like this:

```
framework.infrastructure.k8s.PortForwardingError: Error forwarding port, unexpected output Unable to listen on port 8081: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:8081: bind: address already in use]
```

Unless you're running kubectl port-forward manually, it's likely that ctrl+c interrupted python before it could clean up subprocesses.

You can do ps aux | grep port-forward and then kill the processes by id, or with killall kubectl.
