Enable faster network packet processing with DPDK

This document explains how to enable the Data Plane Development Kit (DPDK) on a virtual machine (VM) instance for faster network packet processing.

DPDK is a framework for performance-intensive applications that require fast packet processing, low latency, and consistent performance. DPDK provides a set of data plane libraries and network interface controller (NIC) poll mode drivers that bypass the kernel networking stack and run directly in user space. For example, enabling DPDK on your VM is useful when running the following:

  • Network function virtualization (NFV) deployments

  • Software-defined networking (SDN) applications

  • Video streaming or voice over IP applications

You can run DPDK on a VM that uses one of the following virtual NIC (vNIC) types:

  • Recommended: gVNIC

    A high-performance, secure, and scalable virtual network interface specifically designed for Compute Engine that succeeds virtIO as the next-generation vNIC.

  • VirtIO-Net

    An open source Ethernet driver that lets VMs efficiently access physical hardware, such as block storage and networking adapters.

One issue with running DPDK in a virtual environment, instead of on physical hardware, is that virtual environments lack support for SR-IOV and the I/O Memory Management Unit (IOMMU) that high-performing applications rely on. To overcome this limitation, you must run DPDK on guest physical addresses rather than host virtual addresses by using one of the following drivers:

  • UIO (igb_uio)

  • IOMMU-less VFIO (vfio-pci with No-IOMMU mode enabled)

Before you begin

Requirements

When you create a VM to run DPDK, make sure of the following:

  • To avoid a lack of network connectivity when running your applications, use two Virtual Private Cloud networks:

    • A VPC network for the control plane

    • A VPC network for the data plane

  • The two VPC networks must both specify the following:

    • A subnet with a unique IP address range

    • The same region for their subnets

    • The same vNIC type, either gVNIC or VirtIO-Net

  • When creating the VM:

    • You must specify the same region as the two VPC networks' subnets.

    • You must specify the vNIC type you plan to use with DPDK.

    • You must specify a supported machine series for gVNIC or VirtIO-Net.

Restrictions

Running DPDK on a VM has the following restrictions:

Overview of DPDK features and versions

Google recommends using the most recent version of the DPDK driver to benefit from the latest features and bug fixes. The following list provides an overview of what is available with each version of the DPDK driver:

  • 24.07

  • 24.03

    • RSS support (for all supported machine types).

  • 23.11

  • 23.07

    • Added support for third generation and later machine types.

    • Jumbo frame (9K) support for first and second generation machine types.

  • 23.03

    • Support for reporting stats:

      • Software stats for all supported machine types.

      • Hardware stats for first and second generation machine types.

  • 22.11

    • Initial driver release with support for first and second generation machine types.

Configure a VM to run DPDK

This section explains how to create a VM to run DPDK on.

Create the VPC networks

Create two VPC networks, for the data plane and the control plane, by using the Google Cloud console, Google Cloud CLI, or Compute Engine API. You can later specify these networks when creating the VM.

Console

  1. Create a VPC network for the data plane:

    1. In the Google Cloud console, go to VPC networks.

      Go to VPC networks

      The VPC networks page opens.

    2. Click Create VPC network.

      The Create a VPC network page opens.

    3. In the Name field, enter a name for your network.

    4. In the New subnet section, do the following:

      1. In the Name field, enter a name for your subnet.

      2. In the Region menu, select a region for your subnet.

      3. Select IPv4 (single-stack) (default).

      4. In the IPv4 range field, enter a valid IPv4 address range in CIDR notation.

      5. Click Done.

    5. Click Create.

      The VPC networks page opens. It can take up to a minute for the creation of the VPC network to complete.

  2. Create a VPC network for the control plane with a firewall rule to allow SSH connections into the VM:

    1. Click Create VPC network again.

      The Create a VPC network page opens.

    2. In the Name field, enter a name for your network.

    3. In the New subnet section, do the following:

      1. In the Name field, enter a name for the subnet.

      2. In the Region menu, select the same region you specified for the subnet of the data plane network.

      3. Select IPv4 (single-stack) (default).

      4. In the IPv4 range field, enter a valid IPv4 address range in CIDR notation.

        Important: Specify a different IPv4 range than the one you specified in the subnet for the data plane network. Otherwise, creating the network fails.

      5. Click Done.

    4. In the IPv4 firewall rules tab, select the NETWORK_NAME-allow-ssh checkbox, where NETWORK_NAME is the network name you specified in the previous steps.

    5. Click Create.

      The VPC networks page opens. It can take up to a minute for the creation of the VPC network to complete.

gcloud

  1. To create a VPC network for the data plane, follow these steps:

    1. Create a VPC network with a manually-created subnet by using the gcloud compute networks create command with the --subnet-mode flag set to custom.

      gcloud compute networks create DATA_PLANE_NETWORK_NAME \
          --bgp-routing-mode=regional \
          --mtu=MTU \
          --subnet-mode=custom

      Replace the following:

      • DATA_PLANE_NETWORK_NAME: the name for the VPC network for the data plane.

      • MTU: the maximum transmission unit (MTU), which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC data plane network you've just created by using the gcloud compute networks subnets create command.

      gcloud compute networks subnets create DATA_PLANE_SUBNET_NAME \
          --network=DATA_PLANE_NETWORK_NAME \
          --range=DATA_PRIMARY_RANGE \
          --region=REGION

      Replace the following:

      • DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.

      • DATA_PLANE_NETWORK_NAME: the name of the data plane network you specified in the previous steps.

      • DATA_PRIMARY_RANGE: a valid IPv4 range for the subnet in CIDR notation.

      • REGION: the region where you want to create the subnet.
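      For example, with hypothetical names and ranges (a network named data-net with a subnet data-subnet in us-central1), the two commands might look like the following:

      gcloud compute networks create data-net \
          --bgp-routing-mode=regional \
          --mtu=1460 \
          --subnet-mode=custom

      gcloud compute networks subnets create data-subnet \
          --network=data-net \
          --range=10.10.0.0/24 \
          --region=us-central1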

  2. To create a VPC network for the control plane with a firewall rule to allow SSH connections into the VM, follow these steps:

    1. Create a VPC network with a manually-created subnet by using the gcloud compute networks create command with the --subnet-mode flag set to custom.

      gcloud compute networks create CONTROL_PLANE_NETWORK_NAME \
          --bgp-routing-mode=regional \
          --mtu=MTU \
          --subnet-mode=custom

      Replace the following:

      • CONTROL_PLANE_NETWORK_NAME: the name for the VPC network for the control plane.

      • MTU: the MTU, which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC control plane network you've just created by using the gcloud compute networks subnets create command.

      gcloud compute networks subnets create CONTROL_PLANE_SUBNET_NAME \
          --network=CONTROL_PLANE_NETWORK_NAME \
          --range=CONTROL_PRIMARY_RANGE \
          --region=REGION

      Replace the following:

      • CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you specified in the previous steps.

      • CONTROL_PRIMARY_RANGE: a valid IPv4 range for the subnet in CIDR notation. Specify a different range than the one you specified for the data plane subnet.

      • REGION: the same region you specified for the subnet of the data plane network.

    3. Create a VPC firewall rule that allows SSH connections to the control plane network by using the gcloud compute firewall-rules create command with the --rules flag set to tcp:22.

      gcloud compute firewall-rules create FIREWALL_RULE_NAME \
          --action=allow \
          --network=CONTROL_PLANE_NETWORK_NAME \
          --rules=tcp:22

      Replace the following:

      • FIREWALL_RULE_NAME: the name of the firewall rule.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.
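      For example, with a hypothetical rule name control-allow-ssh on a network named control-net, the command might look like the following:

      gcloud compute firewall-rules create control-allow-ssh \
          --action=allow \
          --network=control-net \
          --rules=tcp:22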

API

  1. To create a VPC network for the data plane, follow these steps:

    1. Create a VPC network with a manually-created subnet by making a POST request to the networks.insert method with the autoCreateSubnetworks field set to false.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

      {
        "autoCreateSubnetworks": false,
        "name": "DATA_PLANE_NETWORK_NAME",
        "mtu": MTU
      }

      Replace the following:

      • PROJECT_ID: the project ID of the current project.

      • DATA_PLANE_NETWORK_NAME: the name for the network for the data plane.

      • MTU: the maximum transmission unit (MTU), which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC data plane network by making a POST request to the subnetworks.insert method.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

      {
        "ipCidrRange": "DATA_PRIMARY_RANGE",
        "name": "DATA_PLANE_SUBNET_NAME",
        "network": "projects/PROJECT_ID/global/networks/DATA_PLANE_NETWORK_NAME"
      }

      Replace the following:

      • PROJECT_ID: the project ID of the project where the data plane network is located.

      • REGION: the region where you want to create the subnet.

      • DATA_PRIMARY_RANGE: the primary IPv4 range for the new subnet in CIDR notation.

      • DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network you created in the previous step.

      • DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous step.
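      For example, assuming a hypothetical project example-project and network name data-net, you could send the first request with curl as follows:

      curl -X POST \
          -H "Authorization: Bearer $(gcloud auth print-access-token)" \
          -H "Content-Type: application/json" \
          -d '{"autoCreateSubnetworks": false, "name": "data-net", "mtu": 1460}' \
          "https://compute.googleapis.com/compute/v1/projects/example-project/global/networks"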

  2. To create a VPC network for the control plane with a firewall rule to allow SSH connections to the VM, follow these steps:

    1. Create a VPC network with a manually-created subnet by making a POST request to the networks.insert method with the autoCreateSubnetworks field set to false.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

      {
        "autoCreateSubnetworks": false,
        "name": "CONTROL_PLANE_NETWORK_NAME",
        "mtu": MTU
      }

      Replace the following:

      • PROJECT_ID: the project ID of the current project.

      • CONTROL_PLANE_NETWORK_NAME: the name for the network for the control plane.

      • MTU: the MTU, which is the largest packet size of the network. The value must be between 1300 and 8896. The default value is 1460. Before setting the MTU to a value higher than 1460, see Maximum transmission unit.

    2. Create a subnet for the VPC control plane network by making a POST request to the subnetworks.insert method.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/subnetworks

      {
        "ipCidrRange": "CONTROL_PRIMARY_RANGE",
        "name": "CONTROL_PLANE_SUBNET_NAME",
        "network": "projects/PROJECT_ID/global/networks/CONTROL_PLANE_NETWORK_NAME"
      }

      Replace the following:

      • PROJECT_ID: the project ID of the project where the control plane network is located.

      • REGION: the region where you want to create the subnet.

      • CONTROL_PRIMARY_RANGE: the primary IPv4 range for the new subnet in CIDR notation.

      • CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network you created in the previous step.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous step.

    3. Create a VPC firewall rule that allows SSH connections to the control plane network by making a POST request to the firewalls.insert method. In the request, set the IPProtocol field to tcp and the ports field to 22.

      POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

      {
        "allowed": [
          {
            "IPProtocol": "tcp",
            "ports": [ "22" ]
          }
        ],
        "name": "FIREWALL_RULE_NAME",
        "network": "projects/PROJECT_ID/global/networks/CONTROL_PLANE_NETWORK_NAME"
      }

      Replace the following:

      • PROJECT_ID: the project ID of the project where the control plane network is located.

      • FIREWALL_RULE_NAME: the name of the firewall rule.

      • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.

For more configuration options when creating a VPC network, see Create and manage VPC networks.

Create a VM that uses the VPC networks for DPDK

Create a VM that enables gVNIC or virtIO-Net on the two VPC networks that you created previously by using the Google Cloud console, the gcloud CLI, or the Compute Engine API.

Recommended: Specify Ubuntu LTS or Ubuntu Pro as the operating system image because of their package manager support for the UIO and IOMMU-less VFIO drivers. If you don't want to specify either of these operating systems, specifying Debian 11 or later is recommended for faster packet processing.

Console

Create a VM that uses the two VPC network subnets you created in the previous steps by doing the following:

  1. In the Google Cloud console, go to VM instances.

    Go to VM instances

    The VM instances page opens.

  2. Click Create instance.

    The Create an instance page opens.

  3. In the Name field, enter a name for your VM.

  4. In the Region menu, select the same region where you created your networks in the previous steps.

    Important: Attempting to create a VM in a different region from where the control and data plane networks exist causes errors.

  5. In the Zone menu, select a zone for your VM.

  6. In the Machine configuration section, do the following:

    1. Select one of the following options:

    2. Optional. If you specified GPUs in the previous step and you want to change the GPU to attach to the VM, do one or more of the following:

      1. In the GPU type menu, select a type of GPU.

      2. In the Number of GPUs menu, select the number of GPUs.

    3. In the Series menu, select a machine series.

      Important: gVNIC is supported with all machine series, but VirtIO-Net is not supported with the newest machine series (third generation and T2A). If you choose a machine series that doesn't support your vNIC type, creating the VM fails.

    4. In the Machine type menu, select a machine type.

    5. Optional: Expand Advanced configurations, and follow the prompts to further customize the machine for this VM.

  7. Optional: In the Boot disk section, click Change, and then follow the prompts to change the disk image.

    Important: If you specify gVNIC as the vNIC type for this VM, make sure to specify a supported disk image. Otherwise, creating the VM fails.
  8. Expand the Advanced options section.

  9. Expand the Networking section.

  10. In the Network performance configuration section, do the following:

    1. In the Network interface card menu, select one of the following:

      • To use gVNIC, select gVNIC.

      • To use VirtIO-Net, select VirtIO.

      Note: The value - in the Network interface card menu indicates that the vNIC type can be either gVNIC or VirtIO-Net depending on the machine family type. If both gVNIC and VirtIO-Net are available for a VM, the default is VirtIO-Net.

    2. Optional: For higher network performance and reduced latency, select the Enable Tier_1 networking checkbox.

      Important: You can only enable Tier_1 networking when you use gVNIC and specify a supported machine type that has 30 vCPUs or more. Otherwise, creating the VM fails.

  11. In the Network interfaces section, do the following:

    1. In the default row, click Delete item "default".

    2. Click Add network interface.

      The New network interface section appears.

    3. In the Network menu, select the control plane network you created in the previous steps.

    4. Click Done.

    5. Click Add network interface again.

      The New network interface section appears.

    6. In the Network menu, select the data plane network you created in the previous steps.

    7. Click Done.

  12. Click Create.

    The VM instances page opens. It can take up to a minute for the creation of the VM to complete.

gcloud

Create a VM that uses the two VPC network subnets you created in the previous steps by using the gcloud compute instances create command with the following flags:

gcloud compute instances create VM_NAME \
    --image-family=IMAGE_FAMILY \
    --image-project=IMAGE_PROJECT \
    --machine-type=MACHINE_TYPE \
    --network-interface=network=CONTROL_PLANE_NETWORK_NAME,subnet=CONTROL_PLANE_SUBNET_NAME,nic-type=VNIC_TYPE \
    --network-interface=network=DATA_PLANE_NETWORK_NAME,subnet=DATA_PLANE_SUBNET_NAME,nic-type=VNIC_TYPE \
    --zone=ZONE

Replace the following:

  • VM_NAME: the name for the VM.

  • IMAGE_FAMILY: the image family of the operating system image to use for the boot disk.

  • IMAGE_PROJECT: the image project that contains the image family.

  • MACHINE_TYPE: the machine type for the VM. Specify a machine type from a supported machine series for your vNIC type.

  • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.

  • CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.

  • DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous steps.

  • DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.

  • VNIC_TYPE: the vNIC type to use with DPDK. Specify GVNIC or VIRTIO_NET.

  • ZONE: the zone where you want to create the VM. Specify a zone within the same region as the two subnets.

For example, to create a VM named dpdk-vm in the us-central1-a zone that specifies an SSD persistent disk of 512 GB, a predefined C2 machine type with 60 vCPUs, Tier_1 networking, and a data plane and a control plane network that both use gVNIC, run the following command:

gcloud compute instances create dpdk-vm \
    --boot-disk-size=512GB \
    --boot-disk-type=pd-ssd \
    --image-project=ubuntu-os-cloud \
    --image-family=ubuntu-2004-lts \
    --machine-type=c2-standard-60 \
    --network-performance-configs=total-egress-bandwidth-tier=TIER_1 \
    --network-interface=network=control,subnet=control,nic-type=GVNIC \
    --network-interface=network=data,subnet=data,nic-type=GVNIC \
    --zone=us-central1-a
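To confirm that both interfaces use the vNIC type you intended, you can inspect the created VM. This check is optional, and the format expression is just one way to read the field:

gcloud compute instances describe dpdk-vm \
    --zone=us-central1-a \
    --format="value(networkInterfaces[].nicType)"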

API

Create a VM that uses the two VPC network subnets you created in the previous steps by making a POST request to the instances.insert method with the following fields:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "machineType": "zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/family/IMAGE_FAMILY"
      }
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/CONTROL_PLANE_NETWORK_NAME",
      "subnetwork": "regions/REGION/subnetworks/CONTROL_PLANE_SUBNET_NAME",
      "nicType": "VNIC_TYPE"
    },
    {
      "network": "global/networks/DATA_PLANE_NETWORK_NAME",
      "subnetwork": "regions/REGION/subnetworks/DATA_PLANE_SUBNET_NAME",
      "nicType": "VNIC_TYPE"
    }
  ]
}

Replace the following:

  • PROJECT_ID: the project ID of the project where you want to create the VM.

  • ZONE: the zone where you want to create the VM. Specify a zone within the same region as the two subnets.

  • VM_NAME: the name for the VM.

  • MACHINE_TYPE: the machine type for the VM. Specify a machine type from a supported machine series for your vNIC type.

  • IMAGE_PROJECT: the image project that contains the image family.

  • IMAGE_FAMILY: the image family of the operating system image to use for the boot disk.

  • CONTROL_PLANE_NETWORK_NAME: the name of the control plane network you created in the previous steps.

  • REGION: the region where the subnets are located.

  • CONTROL_PLANE_SUBNET_NAME: the name of the subnet for the control plane network.

  • VNIC_TYPE: the vNIC type to use with DPDK. Specify GVNIC or VIRTIO_NET.

  • DATA_PLANE_NETWORK_NAME: the name of the data plane network you created in the previous steps.

  • DATA_PLANE_SUBNET_NAME: the name of the subnet for the data plane network.

For example, to create a VM named dpdk-vm in the us-central1-a zone that specifies an SSD persistent disk of 512 GB, a predefined C2 machine type with 60 vCPUs, Tier_1 networking, and a data plane and a control plane network that both use gVNIC, make the following POST request:

POST https://compute.googleapis.com/compute/v1/projects/example-project/zones/us-central1-a/instances

{
  "name": "dpdk-vm",
  "machineType": "zones/us-central1-a/machineTypes/c2-standard-60",
  "disks": [
    {
      "initializeParams": {
        "diskSizeGb": "512",
        "diskType": "zones/us-central1-a/diskTypes/pd-ssd",
        "sourceImage": "projects/ubuntu-os-cloud/global/images/family/ubuntu-2004-lts"
      },
      "boot": true
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/control",
      "subnetwork": "regions/us-central1/subnetworks/control",
      "nicType": "GVNIC"
    },
    {
      "network": "global/networks/data",
      "subnetwork": "regions/us-central1/subnetworks/data",
      "nicType": "GVNIC"
    }
  ],
  "networkPerformanceConfig": {
    "totalEgressBandwidthTier": "TIER_1"
  }
}

For more configuration options when creating a VM, see Create and start a VM instance.

Install DPDK on your VM

To install DPDK on your VM, follow these steps:

  1. Connect to the VM you created in the previous section by using SSH.

  2. Configure the dependencies for DPDK installation:

    sudo apt-get update && sudo apt-get upgrade -yq
    sudo apt-get install -yq build-essential ninja-build python3-pip \
        linux-headers-$(uname -r) pkg-config libnuma-dev
    sudo pip install pyelftools meson
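    Optionally, you can confirm that the build tools installed correctly and are on your PATH:

    meson --version
    ninja --version
    python3 -c "import elftools"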
  3. Install DPDK:

    wget https://fast.dpdk.org/rel/dpdk-23.07.tar.xz
    tar xvf dpdk-23.07.tar.xz
    cd dpdk-23.07

    Important: If you specified gVNIC as the vNIC type in the previous steps, you must install DPDK version 22.11 or later. Using an earlier version of DPDK causes errors when you try to test or use DPDK on your VM.
  4. To build DPDK with the examples:

    meson setup -Dexamples=all build
    sudo ninja -C build install; sudo ldconfig
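    To verify that the build produced the example applications, you can check for the testpmd binary that is used later in this document:

    ls build/app/dpdk-testpmd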

Install driver

To prepare DPDK to run on a driver, install the driver by selecting one of the following methods:

Install an IOMMU-less VFIO driver

To install the IOMMU-less VFIO driver, follow these steps:

  1. Check if VFIO is enabled:

    cat /boot/config-$(uname -r) | grep NOIOMMU
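    On kernels built with No-IOMMU support, the command typically prints a line similar to the following:

    CONFIG_VFIO_NOIOMMU=y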

    If VFIO isn't enabled, then follow the steps in Install UIO.

  2. Enable the No-IOMMU mode in VFIO:

    sudo bash -c 'echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode'
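    To confirm that the mode took effect, you can read the parameter back; it should print Y:

    cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode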

Install UIO

To install the UIO driver on DPDK, select one of the following methods:

Install UIO by using git

To install the UIO driver on DPDK by using git, follow these steps:

  1. Clone the igb_uio git repository to a disk in your VM:

    git clone https://dpdk.org/git/dpdk-kmods

  2. From the parent directory of the cloned git repository, build the module and install the UIO driver on DPDK:

    pushd dpdk-kmods/linux/igb_uio
    sudo make
    sudo depmod && sudo insmod igb_uio.ko
    popd
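    To confirm that the module loaded, you can list it; the command prints a line for igb_uio if the insmod succeeded:

    lsmod | grep igb_uio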

Install UIO by using Linux packages

To install the UIO driver on DPDK by using Linux packages, follow these steps:

  1. Install the dpdk-igb-uio-dkms package:

    sudo apt-get install -y dpdk-igb-uio-dkms
  2. Install the UIO driver on DPDK:

    sudo modprobe igb_uio

Bind DPDK to a driver and test it

To bind DPDK to the driver you installed in the previous section, follow these steps:

  1. Get the Peripheral Component Interconnect (PCI) slot number for the current network interface:

    sudo lspci | grep -e "gVNIC" -e "Virtio network device"

    For example, if the VM is using ens4 as the network interface, the PCI slot number is 00:04.0.
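    The output line begins with the PCI slot number. As an illustration only (the exact device string varies by vNIC type and driver version), the output might resemble the following:

    00:04.0 Ethernet controller: Google, Inc. Compute Engine Virtual Ethernet [gVNIC]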

  2. Stop the network interface connected to the network adapter:

    sudo ip link set NETWORK_INTERFACE_NAME down

    Replace NETWORK_INTERFACE_NAME with the name of the network interface specified in the VPC networks. To see which network interface the VM is using, view the configuration of the network interface:

    sudo ifconfig
  3. Bind DPDK to the driver:

    sudo dpdk-devbind.py --bind=DRIVER PCI_SLOT_NUMBER

    Replace the following:

    • DRIVER: the driver to bind DPDK on. Specify one of the following values:

      • UIO driver: igb_uio

      • IOMMU-less VFIO driver: vfio-pci

    • PCI_SLOT_NUMBER: the PCI slot number of the current network interface formatted as 00:0NUMBER.0.
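    To confirm the bind, you can print the current device-to-driver mapping; the interface should appear under the drivers section for the driver you chose:

    dpdk-devbind.py --status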

  4. Create the /mnt/huge directory, and then create some hugepages for DPDK to use for buffers:

    sudo mkdir /mnt/huge
    sudo mount -t hugetlbfs -o pagesize=1G none /mnt/huge
    sudo bash -c 'echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages'
    sudo bash -c 'echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages'
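    To confirm that the hugepages were reserved, you can inspect the kernel's memory counters:

    grep Huge /proc/meminfo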
  5. Test that DPDK can use the network interface you created in the previous steps by running the testpmd example application that is included with the DPDK libraries:

    sudo ./build/app/dpdk-testpmd
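    For example, a short interactive session might look like the following; the core list is illustrative and depends on your machine type:

    sudo ./build/app/dpdk-testpmd -l 0-3 -- -i
    testpmd> show port summary all
    testpmd> quit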

    For more information about testing DPDK, see Testpmd Command-line Options.

Unbind DPDK

After using DPDK, you can unbind it from the driver you've installed in the previous section. To unbind DPDK, follow these steps:

  1. Unbind DPDK from the driver:

    sudo dpdk-devbind.py -u PCI_SLOT_NUMBER

    Replace PCI_SLOT_NUMBER with the PCI slot number you specified in the previous steps. If you want to verify the PCI slot number for the current network interface, run the following command:

    sudo lspci | grep -e "gVNIC" -e "Virtio network device"

    For example, if the VM is using ens4 as the network interface, the PCI slot number is 00:04.0.

  2. Reload the Compute Engine network driver:

    sudo bash -c 'echo PCI_SLOT_NUMBER > /sys/bus/pci/drivers/VNIC_DIRECTORY/bind'
    sudo ip link set NETWORK_INTERFACE_NAME up

    Replace the following:

    • PCI_SLOT_NUMBER: the PCI slot number you specified in the previous steps.

    • VNIC_DIRECTORY: the directory of the vNIC. Depending on the vNIC type you're using, specify one of the following values:

      • gVNIC: gvnic

      • VirtIO-Net: virtio-pci

    • NETWORK_INTERFACE_NAME: the name of the network interface you specified in the previous section.
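    For example, for a gVNIC-based VM whose interface ens4 sits in PCI slot 00:04.0 (the sample values used earlier in this document), the commands would be:

    sudo bash -c 'echo 00:04.0 > /sys/bus/pci/drivers/gvnic/bind'
    sudo ip link set ens4 up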

What's next
