Ussuri Series Release Notes

6.2.2-39

Known Issues

  • When using a distribution with a recent SELinux release, such as CentOS 8 Stream, the PING health monitor does not work because shell_exec_t calls are denied by SELinux.

  • Fixed a configuration issue which allowed authenticated and authorized users to inject code into the HAProxy configuration using API requests. The Octavia API no longer accepts unencoded whitespace characters in url_path values in update requests for healthmonitors.

Upgrade Notes

  • The fix that updates the Netfilter Conntrack Sysfs variables requires rebuilding the amphora image in order to be effective.

Security Issues

  • Filter out private information from the taskflow logs when INFO level messages are enabled and jobboard is enabled. Logs might have included TLS certificates and private_key. By default, Octavia enables only WARNING and above messages in taskflow, and jobboard is disabled.
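    For reference, a minimal octavia.conf sketch of those defaults (the taskflow level is controlled through oslo.log's default_log_levels, which normally also lists other modules alongside the entry shown):

      [DEFAULT]
      # Keep taskflow at WARNING and above so Flow parameters stay out of the logs
      default_log_levels = taskflow=WARN

      [task_flow]
      # Jobboard is disabled by default
      jobboard_enabled = False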

Bug Fixes

  • Increased the TCP buffer memory maximum and enabled MTU ICMP black hole detection.

  • The generated RSyslog configuration on the amphora now supports RSyslog failover with TCP if multiple RSyslog servers were specified.

  • In order to avoid hitting the Neutron API hard when a batch update creates many new members, we cache the subnet validation results in the batch update members API call. We also now validate only new members during batch member updates, since the subnet ID is immutable.

  • Disabled conntrack for TCP flows in the amphora; this reduces memory usage for HAProxy-based listeners and prevents some kernel warnings about dropped packets.

  • The parameters of a taskflow Flow were logged in INFO level messages by taskflow; they included TLS-enabled listener and pool parameters, such as certificates and private_key.

  • Fix an authentication error with Barbican when creating a TERMINATED_HTTPS listener with application credential tokens or trust IDs.

  • Fix disabled UDP pools. Disabled UDP pools were marked as “OFFLINE” but the requests were still forwarded to the members of the pool.

  • Correctly detect the member operating status “drain” when querying status data from HAProxy.

  • Enable required SELinux booleans for CentOS or RHEL amphora image.

  • Fixed a backwards compatibility issue with the feature that preserves HAProxy server states between reloads. HAProxy versions 1.5 and below do not support this feature, so Octavia will not activate it on amphorae with those versions.

  • Fix a bug that prevented the provisioning_state of a health monitor from being set to ERROR when an error occurred while creating, updating or deleting a health monitor.

  • Fix an issue with amphorav2 and persistence: some long tasks executed by a controller might have been released in taskflow and rescheduled on another controller. Octavia now ensures that a task is never released early by using a keepalive mechanism to notify taskflow (and its redis backend) that a job is still running.

  • Fixed an issue with members in ERROR operating status that may have been updated briefly to ONLINE during a Load Balancer configuration change.

  • Fixed a potential error when plugging a member from a new network after deleting another member and unplugging its network. Octavia may have tried to plug the new network into a new interface but with an already existing name. This fix requires updating the Amphora image.

  • Netfilter Conntrack Sysfs variables net.netfilter.nf_conntrack_max and nf_conntrack_expect_max now get set to sensible values on the amphora. Previously, kernel default values were used, which were much too low for the configured net.netfilter.nf_conntrack_buckets value. As a result, packets could get dropped because the conntrack table filled up too quickly. Note that this affects only UDP and SCTP protocol listeners; connection tracking is disabled for TCP-based connections on the amphora, including HTTP(S).
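    The effective values can be checked from inside a rebuilt amphora (illustrative; the exact numbers depend on the instance):

      $ sysctl net.netfilter.nf_conntrack_max \
               net.netfilter.nf_conntrack_expect_max \
               net.netfilter.nf_conntrack_buckets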

  • Fix an issue with the provisioning status of a load balancer that was set to ERROR too early when an error occurred, making the load balancer mutable while the execution of the tasks for this resource had not finished yet.

  • Fix an issue that could set the provisioning status of a load balancer to aPENDING_UPDATE state when an error occurred in the amphora failover flow.

  • Fix a bug when updating a load balancer with a QoS policy after a failover: Octavia attempted to update the VRRP ports of the deleted amphorae, moving the provisioning status of the load balancer to ERROR.

  • Fix a potential race condition when updating a resource in the amphorav2 worker. The worker was not waiting for the resource to be set to PENDING_UPDATE, so the resource may have been updated with old data from the database, resulting in a no-op update.

  • Fix an issue when Octavia performs a failover of an ACTIVE-STANDBY load balancer that has both amphorae missing. Some tasks in the controller took too much time to time out because the timeout values defined in [haproxy_amphora].active_connection_max_retries and [haproxy_amphora].active_connection_rety_interval were not used.
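    These options live in octavia.conf; a sketch with assumed default values (note that the option name really is spelled active_connection_rety_interval):

      [haproxy_amphora]
      # Number of retries when connecting to an active amphora (assumed default)
      active_connection_max_retries = 15
      # Seconds between those retries (assumed default)
      active_connection_rety_interval = 2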

  • Fix a bug that could have triggered a race condition when configuring a member interface in the amphora. Due to this race condition, a network interface might have been deleted from the amphora, leading to a loss of connectivity.

  • Fixed “Could not retrieve certificate” error when updating/deleting the client_ca_tls_container_ref field of a listener after a CA/CRL was deleted.

  • Fixed validations in the L7 rule and session cookie APIs in order to prevent authenticated and authorized users from injecting code into the HAProxy configuration. CR and LF (\r and \n) are no longer allowed in L7 rule keys and values. The session persistence cookie names must follow the rules described in https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie.

  • Fix load balancers stuck in PENDING_UPDATE issues for some API calls (POST /l7rule, PUT /pool) when a provider denied the call.

  • Validate that the creation of L7 policies is compatible with the protocol of the listener in the Amphora driver. L7 policies are allowed for Terminated HTTPS or HTTP protocol listeners, but not for HTTPS, TCP or UDP protocol listeners.

6.2.2

Bug Fixes

  • Fixes a load balancer creation failure when one of the listener ports matches one of the Octavia-generated peer ports and allowed_cidr is explicitly set to 0.0.0.0/0 on the listener. This was due to the creation of two security group rules, one with remote_ip_prefix as None and one with remote_ip_prefix as 0.0.0.0/0; Neutron rejects the second request because the security group rule already exists.

  • Fix a serialization error when using host_routes in VIP subnets when persistence in the amphorav2 driver is enabled.

  • Fixed MAX_TIMEOUT for the timeout_client_data, timeout_member_connect, timeout_member_data and timeout_tcp_inspect listener API parameters. The value was reduced from 365 days to 24 days, so it no longer exceeds the maximum value of the data type in the DB.
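    The 24-day cap follows from the storage type, assuming the timeouts are stored as milliseconds in a signed 32-bit integer column: 2^31 - 1 = 2,147,483,647 ms ≈ 24.8 days, so a 24-day maximum (2,073,600,000 ms) fits, while the previous 365-day maximum (31,536,000,000 ms) overflowed it.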

  • Increase the limit value for nr_open and file-max in the amphora, the new value is based on what HAProxy 2.x is expecting from the system with the greatest maxconn value that Octavia can set.

6.2.1

Bug Fixes

  • Fixed an issue where batch member updates that don’t contain any changes were not properly rolling back the update.

  • Fixed an issue where an amphorav2 LB could not be reached after load balancer failover. The LB security group was not set on the amphora port.

  • Fixes an issue where provider drivers may not decrement the load balancer objects quota on delete.

  • Fix an issue with the rsyslog configuration file in the Amphora when the log offloading feature and the local log storage feature are both disabled.

  • Some IPv6 UDP members were incorrectly marked in ERROR status, because of a formatting issue while generating the health message in the amphora.

  • Fixed an issue with the lo interface in the amphora-haproxy network namespace. The lo interface was down and prevented haproxy from communicating with other haproxy processes (for persistent stick tables) on configuration change. It delayed old haproxy worker cleanup and increased memory consumption after reloading the configuration.

  • Fix load balancers that use customized host_routes in the VIP or the member subnets in amphorav2.

  • Fix weighted round-robin for UDP listeners with keepalived and lvs. The algorithm must be specified as ‘wrr’ in order for weighted round-robin to work correctly, but was being set to ‘rr’.
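    For context, this maps to the lb_algo field in the LVS configuration that keepalived consumes; a simplified sketch, with illustrative addresses, ports and weights, and health-check stanzas omitted:

      virtual_server 203.0.113.10 53 {
          delay_loop 10
          lb_algo wrr    # weighted round-robin; was previously rendered as rr
          lb_kind NAT
          protocol UDP

          real_server 192.0.2.11 53 {
              weight 3
          }
          real_server 192.0.2.12 53 {
              weight 1
          }
      }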

  • Fixed the healthcheck endpoint always querying the backends by caching results for a configurable time. The default is five seconds.
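    The cache window can be tuned in octavia.conf; a sketch, where the option name is an assumption based on this note:

      [api_settings]
      # Seconds to cache healthcheck results before querying the backends again
      healthcheck_refresh_interval = 5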

  • Fix a bug that allowed a user to create a load balancer on a vip_subnet_id that belongs to another user using the subnet UUID.

6.2.0

Bug Fixes

  • Fixes an issue with load balancer failover, when the VIP subnet is out of IP addresses, that could lead to the VIP being deallocated.

  • Fixed an issue where members added to TLS-enabled pools would go to ERROR provisioning status.

  • Fix default value override for timeout values for listeners. Changing the default timeouts in the configuration file wasn’t correctly applied in the default listener parameters.

  • Fix operational status for disabled UDP listeners. The operating status of disabled UDP listeners is now OFFLINE instead of ONLINE, making the behavior similar to that of HTTP/HTTPS/TCP/… listeners.

  • Fixed an issue that could cause load balancers, with multiple amphora in a failed state, to be unable to complete a failover.

  • Fix an incorrect operating_status with empty UDP pools. A UDP pool without any member is now ONLINE instead of OFFLINE.

  • Add missing cloud-utils-growpart RPM to Red Hat based amphora images.

  • Add missing cronie RPM to Red Hat based amphora images.

  • Fix nf_conntrack_buckets sysctl in the Amphora, its value was incorrectly set.

  • Fixed an issue where updating a CRL or client certificate on a pool would cause the pool to go into ERROR.

  • Fixed an issue where TLS-enabled pools would fail to provision.

  • Fix a potential invalid DOWN operating status for members of a UDP pool. A race condition could have occurred when building the first heartbeat message after adding a new member to a pool; this recently added member could have been seen as DOWN.

  • Add a validation step in the Octavia Amphora driver to ensure that the port_security_enabled parameter is set on the VIP network.

6.1.0

New Features

  • Add a new configuration option to define the default connection_limit for new listeners that use the Amphora provider. The option is [haproxy_amphora].default_connection_limit and its default value is 50,000. This value is used when creating or setting a listener with -1 as the connection_limit parameter, or when unsetting the connection_limit parameter.
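    In octavia.conf this looks like:

      [haproxy_amphora]
      # Applied when a listener is created with connection_limit = -1
      # or when connection_limit is unset
      default_connection_limit = 50000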

Upgrade Notes

  • The failover improvements do not require an updated amphora image, but updating existing amphora will minimize the failover outage time for standalone amphora on subsequent failovers.

Security Issues

  • If you are using the admin_or_owner-policy.yaml policy override file you should upgrade your API processes to include the unscoped token fix. The default policies are not affected by this issue.

Bug Fixes

  • Fixed an issue with failing over an amphora if the pair amphora in an active/standby pair had a missing VRRP port in neutron.

  • Fixed an issue where SNI container settings were not being applied on listener update API calls.

  • Fixed an Octavia API validation on listener update where SNI containers could be set on non-TERMINATED_HTTPS listeners.

  • Fixed an issue where some columns could not be used for sort keys in API list calls.

  • Fixed an issue where listener creation failed when the Barbican service has TLS enabled.

  • Fixed an issue where amphora load balancers fail to create when Nova anti-affinity is enabled and topology is SINGLE.

  • Fixed an issue where the listener “insert_headers” parameter was accepted for protocols that do not support header insertion.

  • Fixed an issue where UDP only load balancers would not bring up the VIP address.

  • Fixes an issue when using the admin_or_owner-policy.yaml policy override file and unscoped tokens.

  • With haproxy 1.8.x releases, haproxy consumes much more memory in the amphorae because of pre-allocated data structures. This amount of memory depends on the maxconn parameters in its configuration file (which is related to the connection_limit parameter in the Octavia API). In the Amphora provider, the default connection_limit value -1 is now converted to a maxconn of 50,000. It was previously 1,000,000 but that value triggered some memory allocation issues when quickly performing multiple configuration updates in a load balancer.

  • Significantly improved the reliability and performance of amphora and load balancer failovers. This is especially true when the Nova service is experiencing failures.

6.0.1

Upgrade Notes

  • An amphora image update is recommended to pick up a workaround to an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

Bug Fixes

  • Fixed an issue where, when a loadbalancer is disabled, the Octavia Health Manager kept failing over the amphorae.

  • Workaround an HAProxy issue where it would fail to reload on configuration change should the local peer name start with “-x”.

6.0.0

New Features

  • Added the oslo-middleware healthcheck app to the Octavia API. Hitting /healthcheck will return a 200. This is enabled via the [api_settings] healthcheck_enabled setting and is disabled by default.
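    To enable it, set the option in octavia.conf and query the endpoint (port 9876 is the usual Octavia API default and is an assumption about your deployment):

      [api_settings]
      healthcheck_enabled = True

      $ curl -i http://<api-host>:9876/healthcheck
      HTTP/1.1 200 OK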

  • Operators can now use the amphorav2 provider, which uses a jobboard-based controller. A jobboard controller solves the issue of resources stuck in PENDING_* states by writing info about task states to a persistent backend and monitoring job claims via jobboard.

  • Add listener and pool protocol validation. The pool and listener can’t be combined arbitrarily. We need some constraints on the protocol side.

  • Added support for CentOS 8 amphora images.

  • Two new types of health monitoring are now valid for UDP listeners. Both HTTP and TCP check types can now be used.

  • Add an API for allowing administrators to manage Octavia Availability Zones and Availability Zone Profiles, which behave nearly identically to Flavors and Flavor Profiles.

  • Availability zone profiles can now override the valid_vip_networks configuration option.

  • Added an option to the diskimage-create.sh script to specify the Octavia Git branch to build the image from.

  • The load balancer create command now accepts an availability_zone argument, as shown in the example after this note. With the amphora driver this will create a load balancer in the targeted compute availability_zone in nova.

    When using spare pools, it will create spares in each AZ. For the amphora driver, if no [nova] availability_zone is configured and availability zones are used, results may be slightly unpredictable.

    Note (for the amphora driver): if it is possible for an amphora to change availability zone after initial creation (not typically possible without outside intervention) this may affect the ability of this feature to function properly.
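    A CLI sketch (names are illustrative; assumes a python-octaviaclient release with availability zone support):

      $ openstack loadbalancer create --name lb1 \
          --vip-subnet-id public-subnet \
          --availability-zone az1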

Upgrade Notes

  • After this upgrade, users will no longer be able to use network resources they cannot see or “show” on load balancers. Operators can revert this behavior by setting the “allow_invisible_resource_usage” configuration file setting to True.

  • Any amphorae running a py3 based image must be recycled or else they will eventually fail on certificate rotation.

  • Python 2.7 support has been dropped. The minimum version of Python now supported by Octavia is Python 3.6.

  • A new amphora image is required to fix the potential certs-ramfs race condition.

Security Issues

  • Previously, if a user knew or could guess the UUID for a network resource, they could use that UUID to create load balancer resources. Now the user must have permission to see or “show” the resource before it can be used with a load balancer. This will be the new default, but operators can disable this behavior via the configuration file setting “allow_invisible_resource_usage” set to True. This issue falls under the “Class C1” security issue as the user would require a valid UUID.
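    A minimal override sketch (the [networking] section is an assumption about where this option lives):

      [networking]
      # Revert to the pre-fix behavior (not recommended)
      allow_invisible_resource_usage = True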

  • Correctly require two-way certificate authentication to connect to the amphora agent API (CVE-2019-17134).

  • A race condition between the certs-ramfs and the amphora agent may lead to tenant TLS content being stored on the amphora filesystem instead of in the encrypted RAM filesystem.

Bug Fixes

  • Resolved broken certificate upload on py3 based amphora images. On a housekeeping certificate rotation event, the amphora would clear out its server certificate and return a 500, putting the amphora in ERROR status and breaking further communication. See upgrade notes.

  • Fixes an issue where load balancers with more than one TLS enabled listener, one or more SNI enabled, may load certificates from other TLS enabled listeners for SNI use.

  • Fixed a potential race condition with the certs-ramfs and amphora agent services.

  • Fixes an issue where load balancers with more than one TLS enabled listener, using client authentication and/or backend re-encryption, may load incorrect certificates for the listener.

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the housekeeping service and leave resources such as amphorae in a BOOTING status.

  • Fixed an issue where load balancers would go into ERROR when setting data not visible to providers (e.g. tags).

  • Fixes the ability to filter on the provider flavor capabilities API.

  • Fixed code that configured the CentOS/Red Hat amphora images to use the correct names for the network ‘ifcfg’ files for static routes and routing rules. It was using the wrong name for the routes file, and did not support IPv6 in either file. For more information, see https://storyboard.openstack.org/#!/story/2007051

  • Fix a bug that could interrupt resource creation when performing a graceful shutdown of the controller worker and leave resources in a PENDING_CREATE/PENDING_UPDATE/PENDING_DELETE provisioning status. If the duration of an Octavia flow is greater than the ‘graceful_shutdown_timeout’ configuration value, stopping the Octavia worker can still interrupt the creation of resources.
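    This timeout comes from oslo.service; a sketch of raising it for long flows (section placement and value are assumptions):

      [DEFAULT]
      # Seconds to wait for in-flight flows to finish on graceful shutdown
      graceful_shutdown_timeout = 300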

  • Delay between checks on UDP healthmonitors was using the incorrect config value timeout, when it should have been delay.

Other Notes

  • Amphorae that are booting for a specific loadbalancer will now be linked to that loadbalancer immediately upon creation. Previously this would not happen until near the end of the process, leaving a gap during booting during which it was difficult to understand which booting amphora belonged to which loadbalancer. This was especially problematic when attempting to troubleshoot loadbalancers that entered ERROR status due to boot issues.