Migrate from Kafka to Pub/Sub Lite
- Current customers: Pub/Sub Lite remains functional until March 18, 2026. If you have not used Pub/Sub Lite within the 90-day period preceding July 15, 2025 (April 15, 2025 - July 15, 2025), you won't be able to access Pub/Sub Lite starting on July 15, 2025.
- New customers: Pub/Sub Lite is no longer available for new customers after September 24, 2024.
You can migrate your Pub/Sub Lite service to Google Cloud Managed Service for Apache Kafka or Pub/Sub.
This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.
This document is useful if you're considering migrating from self-managed Apache Kafka to Pub/Sub Lite.
Key Point: Review Pub/Sub Lite features, pricing, and use cases as compared to Kafka, and then prepare for the migration.

Overview of Pub/Sub Lite
Pub/Sub Lite is a high-volume messaging service built for low cost of operation. Pub/Sub Lite offers zonal and regional storage along with pre-provisioned capacity. Within Pub/Sub Lite, you can choose zonal or regional Lite topics. Regional Lite topics offer the same availability SLA as Pub/Sub topics. However, there are reliability differences between Pub/Sub and Pub/Sub Lite in terms of message replication.
To learn more about Pub/Sub and Pub/Sub Lite, see What is Pub/Sub.
To learn more about Lite-supported regions and zones, see Pub/Sub Lite locations.
Terminology in Pub/Sub Lite
The following are some key terms for Pub/Sub Lite.
Message. Data that moves through the Pub/Sub Lite service.
Topic. A named resource that represents a feed of messages. Within Pub/Sub Lite, you can choose to create a zonal or regional Lite topic. Pub/Sub Lite regional topics store data in two zones of a single region. Pub/Sub Lite zonal topics replicate data within just one zone.
Reservation. A named pool of throughput capacity shared by multiple Lite topics in a region.
Subscription. A named resource that represents an interest in receiving messages from a particular Lite topic. A subscription is similar to a consumer group in Kafka that only connects to a single topic.
Subscriber. A client of Pub/Sub Lite that receives messages from a Lite topic on a specified subscription. A subscription can have multiple subscriber clients. In such a case, the messages are load-balanced across the subscriber clients. In Kafka, a subscriber is called a consumer.
Publisher. An application that creates messages and sends (publishes) them to a specific Lite topic. A topic can have multiple publishers. In Kafka, a publisher is called a producer.
Differences between Kafka and Pub/Sub Lite
While Pub/Sub Lite is conceptually similar to Kafka, it's a different system with a narrower API that is more focused on data ingestion. These differences are immaterial for most stream ingestion and processing workloads, but they matter in some specific use cases.
Kafka as a database
Unlike Kafka, Pub/Sub Lite doesn't currently support transactional publishing or log compaction, although idempotence is supported. These Kafka features are more useful when you use Kafka as a database than as a messaging system. If you use Kafka primarily as a database, consider running your own Kafka cluster or using a managed Kafka solution such as Confluent Cloud. If neither of these solutions is an option, you can also consider using a horizontally scalable database such as Cloud Spanner.
Kafka Streams
Kafka Streams is a data processing system built on top of Kafka. While it does allow injection of consumer clients, it requires access to all administrator operations. Kafka Streams also uses the transactional database properties of Kafka for storing internal metadata. So, Pub/Sub Lite cannot currently be used for Kafka Streams applications.
Apache Beam is a similar streaming data processing system that is integrated with Kafka, Pub/Sub, and Pub/Sub Lite. You can run Beam pipelines in a fully managed way with Dataflow, or on your preexisting Apache Flink and Apache Spark clusters.
Monitor
Kafka clients can read server-side metrics. In Pub/Sub Lite, metrics relevant to publisher and subscriber behavior are managed through Cloud Monitoring with no additional configuration.
Capacity management
The capacity of a Kafka topic is determined by the capacity of the cluster. Replication, key compaction, and batch settings determine the capacity required to service any given topic on the Kafka cluster. The throughput of a Kafka topic is limited by the capacity of the machines on which the brokers are running. By contrast, you must define both storage and throughput capacity for a Pub/Sub Lite topic. Pub/Sub Lite storage capacity is a configurable property of the topic. Throughput capacity is based on the capacity of the configured reservation, and inherent or configured per-partition limits.
Authentication and security
Apache Kafka supports several open authentication and encryption mechanisms. With Pub/Sub Lite, authentication is based on the IAM system. Security is assured through encryption at rest and in transit. Read more about Pub/Sub Lite authentication in the Migration workflow section, later in this document.
Map Kafka properties to Pub/Sub Lite properties
Kafka has many configuration options that control topic structure, limits, and broker properties. Some common ones useful for data ingestion are discussed in this section, with their equivalents in Pub/Sub Lite. Because Pub/Sub Lite is a managed system, you don't have to consider many broker properties.
Topic configuration properties
| Kafka property | Pub/Sub Lite property | Description |
| --- | --- | --- |
| retention.bytes | Storage per partition | All the partitions in a Lite topic have the same configured storage capacity. The total storage capacity of a Lite topic is the sum of the storage capacity of all the partitions in the topic. |
| retention.ms | Message retention period | The maximum amount of time for which a Lite topic stores messages. If you don't specify a message retention period, the Lite topic stores messages until you exceed the storage capacity. |
| flush.ms, acks | Not configurable in Pub/Sub Lite | Publishes are not acknowledged until they are guaranteed to be persisted to replicated storage. |
| max.message.bytes | Not configurable in Pub/Sub Lite | 3.5 MiB is the maximum message size that can be sent to Pub/Sub Lite. Message sizes are calculated in a repeatable manner. |
| message.timestamp.type | Not configurable in Pub/Sub Lite | When using the consumer implementation, the event timestamp is chosen when present, or the publish timestamp is used in its stead. Both publish and event timestamps are available when using Beam. |
To learn more about the Lite topic properties, see Properties of a Lite topic.
Producer configuration properties
Pub/Sub Lite supports the Producer wire protocol. Some properties change the behavior of the producer Cloud Client Libraries; some common ones are discussed in the following table.
| Kafka property | Pub/Sub Lite property | Description |
| --- | --- | --- |
| auto.create.topics.enable | Not configurable in Pub/Sub Lite | Create a topic and a subscription that is roughly equivalent to a consumer group for a single topic in Pub/Sub Lite. You can use the console, gcloud CLI, API, or the Cloud Client Libraries. |
| key.serializer, value.serializer | Not configurable in Pub/Sub Lite | Required when using the Kafka Producer or equivalent library communicating using the wire protocol. |
| batch.size | Supported in Pub/Sub Lite | Batching is supported. The recommended value is 10 MiB for best performance. |
| linger.ms | Supported in Pub/Sub Lite | Batching is supported. The recommended value is 50 ms for best performance. |
| max.request.size | Supported in Pub/Sub Lite | The server imposes a limit of 20 MiB per batch. Set this value to lower than 20 MiB in your Kafka client. |
| enable.idempotence | Supported in Pub/Sub Lite | |
| compression.type | Not supported in Pub/Sub Lite | You must explicitly set this value to none. |
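Putting these values together, a producer properties sketch might look like the following. The settings mirror the recommendations in the preceding table; they are a starting point, not requirements.

```
# Batch settings recommended above for best performance.
batch.size=10485760
linger.ms=50
# Stay below the 20 MiB per-batch server limit.
max.request.size=10485760
enable.idempotence=true
# Compression isn't supported; set it explicitly to none.
compression.type=none
```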
Consumer configuration properties
Pub/Sub Lite supports the Consumer wire protocol. Some properties change the behavior of the consumer Cloud Client Libraries; some common ones are discussed in the following table.
| Kafka property | Description |
| --- | --- |
| key.deserializer, value.deserializer | Required when using the Kafka Consumer or equivalent library communicating using the wire protocol. |
| auto.offset.reset | This configuration is not supported or needed. Subscriptions are guaranteed to have a defined offset location after they are created. |
| message.timestamp.type | The publish timestamp is always available from Pub/Sub Lite and guaranteed to be non-decreasing on a per-partition basis. Event timestamps may or may not be present depending on whether they were attached to the message when published. Both publish and event timestamps are available at the same time when using Dataflow. |
| max.partition.fetch.bytes, max.poll.records | Imposes a soft limit on the number of records and bytes returned from poll() calls and the number of bytes returned from internal fetch requests. The default max.partition.fetch.bytes value of 1 MiB may limit your client's throughput; consider raising this value. |
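For example, a consumer properties sketch that raises the default fetch limits might look like the following. The deserializer classes and limit values here are illustrative choices, not recommendations from this guide.

```
key.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
value.deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
# Raise the 1 MiB per-partition default if it limits throughput (illustrative value).
max.partition.fetch.bytes=4194304
max.poll.records=1000
```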
Compare Kafka and Pub/Sub Lite features
The following table compares Apache Kafka features with Pub/Sub Lite features:
| Feature | Kafka | Pub/Sub Lite |
| --- | --- | --- |
| Message ordering | Yes | Yes |
| Message deduplication | Yes | Yes, using Dataflow |
| Push subscriptions | No | Yes, using Pub/Sub export |
| Transactions | Yes | No |
| Message storage | Limited by available machine storage | Unlimited |
| Message replay | Yes | Yes |
| Logging and monitoring | Self-managed | Automated with Cloud Monitoring |
| Stream processing | Yes, with Kafka Streams, Apache Beam, or Dataproc | Yes, with Beam or Dataproc |
The following table compares what functionality is self-hosted with Kafka and what functionality is managed by Google with Pub/Sub Lite:
| Feature | Kafka | Pub/Sub Lite |
| --- | --- | --- |
| Availability | Manually deploy Kafka to additional locations. | Deployed across the world. See locations. |
| Disaster recovery | Design and maintain your own backup and replication. | Managed by Google. |
| Infrastructure management | Manually deploy and operate virtual machines (VMs) or machines. Maintain consistent versioning and patches. | Managed by Google. |
| Capacity planning | Manually plan storage and compute needs in advance. | Managed by Google. You can increase compute and storage at any time. |
| Support | None. | 24-hour on-call staff and support available. |
Kafka and Pub/Sub Lite cost comparison
The way you estimate and manage costs in Pub/Sub Lite is different than in Kafka. The costs for a Kafka cluster on-premises or in the cloud include the cost of machines, disks, networking, inbound messages, and outbound messages. They also include overhead costs for managing and maintaining these systems and their related infrastructure. When managing a Kafka cluster, you must manually upgrade the machines, plan cluster capacity, and implement disaster recovery that includes extensive planning and testing. You must aggregate all these various costs to determine your true total cost of ownership (TCO).
Pub/Sub Lite pricing includes the reservation cost (published bytes, subscribed bytes, bytes handled by the Kafka proxy) and the cost of provisioned storage. You pay for exactly the resources that you reserve, in addition to outbound message charges. You can use the pricing calculator to estimate your costs.
Migration workflow
To migrate a topic from a Kafka cluster to Pub/Sub Lite, use the following instructions.
Configure Pub/Sub Lite resources
1. Create a Pub/Sub Lite reservation for the expected throughput of all the topics that you're migrating. Use the Pub/Sub Lite pricing calculator to calculate the aggregate throughput metrics of your existing Kafka topics. For more information about how to create reservations, see Create and manage Lite reservations.
2. Create one Pub/Sub Lite topic for each corresponding topic in Kafka. For more information about how to create Lite topics, see Create and manage Lite topics.
3. Create one Pub/Sub Lite subscription for each corresponding consumer group and topic pair in the Kafka cluster, as sketched after this list. For example, for a consumer group named consumers that consumes from topic-a and topic-b, you must create a subscription consumers-a attached to topic-a and a subscription consumers-b attached to topic-b. For more information about how to create subscriptions, see Create and manage Lite subscriptions.
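As an illustration of step 3, the following Java sketch creates the consumers-a subscription with the Pub/Sub Lite client library (com.google.cloud:google-cloud-pubsublite). The project number, zone, topic ID, and subscription ID are placeholders, and the builder calls follow the library's published samples, so verify them against the current client before use.

```java
import com.google.cloud.pubsublite.AdminClient;
import com.google.cloud.pubsublite.AdminClientSettings;
import com.google.cloud.pubsublite.CloudRegion;
import com.google.cloud.pubsublite.CloudZone;
import com.google.cloud.pubsublite.ProjectNumber;
import com.google.cloud.pubsublite.SubscriptionName;
import com.google.cloud.pubsublite.SubscriptionPath;
import com.google.cloud.pubsublite.TopicName;
import com.google.cloud.pubsublite.TopicPath;
import com.google.cloud.pubsublite.proto.Subscription;
import com.google.cloud.pubsublite.proto.Subscription.DeliveryConfig;
import com.google.cloud.pubsublite.proto.Subscription.DeliveryConfig.DeliveryRequirement;

public class CreateLiteSubscriptionSketch {
  public static void main(String[] args) throws Exception {
    // Placeholders: substitute your project number, location, and IDs.
    ProjectNumber project = ProjectNumber.of(123456789L);
    CloudRegion region = CloudRegion.of("us-central1");
    CloudZone zone = CloudZone.of(region, 'a');
    String topicId = "topic-a";
    String subscriptionId = "consumers-a"; // consumer group "consumers" + topic-a

    TopicPath topicPath =
        TopicPath.newBuilder()
            .setProject(project)
            .setLocation(zone)
            .setName(TopicName.of(topicId))
            .build();
    SubscriptionPath subscriptionPath =
        SubscriptionPath.newBuilder()
            .setProject(project)
            .setLocation(zone)
            .setName(SubscriptionName.of(subscriptionId))
            .build();

    Subscription subscription =
        Subscription.newBuilder()
            // Deliver messages only after they are persisted.
            .setDeliveryConfig(
                DeliveryConfig.newBuilder()
                    .setDeliveryRequirement(DeliveryRequirement.DELIVER_AFTER_STORED))
            .setName(subscriptionPath.toString())
            .setTopic(topicPath.toString())
            .build();

    AdminClientSettings settings =
        AdminClientSettings.newBuilder().setRegion(region).build();
    try (AdminClient adminClient = AdminClient.create(settings)) {
      Subscription created = adminClient.createSubscription(subscription).get();
      System.out.println("Created: " + created.getName());
    }
  }
}
```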
Authenticate to Pub/Sub Lite
Based on the type of your Kafka client, choose one of the following methods:

- Java-based Kafka clients running version 3.1.0 or later with rebuilding
- Java-based Kafka clients running version 3.1.0 or later without rebuilding
- All other clients without rebuilding
Java-based Kafka clients running version 3.1.0 or later with rebuilding
For Java-based Kafka clients of version 3.1.0 or later that can be rebuilt on the instance where you're running the Kafka client:
1. Install the com.google.cloud:pubsublite-kafka-auth package.
2. Obtain the necessary parameters for authenticating to Pub/Sub Lite with the help of com.google.cloud.pubsublite.kafka.ClientParameters.getParams.

   The getParams() method (see a code sample) initializes the following JAAS and SASL configurations as parameters for authenticating to Pub/Sub Lite:

   ```
   security.protocol=SASL_SSL
   sasl.mechanism=OAUTHBEARER
   sasl.oauthbearer.token.endpoint.url=http://localhost:14293
   sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
   ```
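The following Java fragment is a minimal sketch of wiring those parameters into a Kafka client's properties. The exact getParams signature and return type come from the linked code sample rather than this guide, so treat the argument list here as an assumption to verify.

```java
import com.google.cloud.pubsublite.kafka.ClientParameters;
import java.util.Map;
import java.util.Properties;

public class LiteAuthSketch {
  public static Properties liteClientProperties(String projectId, String region) {
    Properties props = new Properties();
    props.put("bootstrap.servers", region + "-kafka-pubsub.googleapis.com:443");
    // Assumption: getParams(project, region) returns the SASL/JAAS settings
    // listed above as a map of Kafka property names to values.
    Map<String, Object> authParams = ClientParameters.getParams(projectId, region);
    props.putAll(authParams);
    return props;
  }
}
```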
Java-based Kafka clients running version 3.1.0 or later without rebuilding
For Kafka clients that support KIP-768, we support configuration-only OAUTHBEARER authentication that uses a Python sidecar script. These versions include the January 2022 Java version 3.1.0 or later.
Perform the following steps on the instance where you're running your Kafka client:
1. Install Python 3.6 or higher.

2. Install the Google authentication package:

   ```
   pip install google-auth
   ```

   This library simplifies the various server-to-server authentication mechanisms to access Google APIs. See the google-auth page.

3. Run the kafka_gcp_credentials.py script.

   This script starts a local HTTP server and fetches the default Google Cloud credentials in the environment using google.auth.default().

   The principal in the fetched credentials must have the pubsublite.locations.openKafkaStream permission for the Google Cloud project you are using and the location to which you are connecting. The Pub/Sub Lite Publisher (roles/pubsublite.publisher) and Pub/Sub Lite Subscriber (roles/pubsublite.subscriber) roles have this required permission. Add these roles to your principal.

   The credentials are used in the SASL/OAUTHBEARER authentication for the Kafka client.
The following parameters are required in your properties to authenticate to Pub/Sub Lite from the Kafka client:

```
security.protocol=SASL_SSL
sasl.mechanism=OAUTHBEARER
sasl.oauthbearer.token.endpoint.url=localhost:14293
sasl.login.callback.handler.class=org.apache.kafka.common.security.oauthbearer.secured.OAuthBearerLoginCallbackHandler
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  clientId="unused" clientSecret="unused" \
  extension_pubsubProject="PROJECT_ID";
```

Replace PROJECT_ID with the ID of your project running Pub/Sub Lite.
All other clients without rebuilding
For all other clients, perform the following steps:
1. Download a service account key JSON file for the service account that you intend to use for your client.

2. Encode the service account key file using base64 to use as your authentication string. On Linux or macOS systems, you can use the base64 command (often installed by default) as follows:

   ```
   base64 < my_service_account.json > password.txt
   ```

   You can use the contents of the password file for authentication with the following parameters.
Java
```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="PROJECT_ID" \
  password="contents of base64 encoded password file";
```

Replace PROJECT_ID with the ID of your project running Pub/Sub Lite.
librdkafka
```
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.username=PROJECT_ID
sasl.password=contents of base64 encoded password file
```

Replace PROJECT_ID with the ID of your project running Pub/Sub Lite.
Clone data using Kafka Connect
The Pub/Sub Lite team maintains an implementation of a Kafka Connect sink. You can configure this implementation to copy data from a Kafka topic to a Pub/Sub Lite topic using a Kafka Connect cluster.
To configure the connector to perform the data copy, see Pub/Sub Group Kafka Connector.
If you want to ensure that partition affinity is unaffected by the migration process, ensure that the Kafka topic and Pub/Sub Lite topic have the same number of partitions, and that the pubsublite.ordering.mode property is set to KAFKA. This causes the connector to route messages to the Pub/Sub Lite partition with the same index as the Kafka partition where they were originally published.
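A sink connector configuration following this guidance might look like the following sketch. The connector class and pubsublite.* property names are drawn from the Pub/Sub Group Kafka Connector project, so verify them against its documentation; the project, location, and topic values are placeholders.

```
name=lite-sink
connector.class=com.google.pubsublite.kafka.sink.PubSubLiteSinkConnector
tasks.max=10
# Source Kafka topic; use the same partition count as the Lite topic.
topics=topic-a
# Destination Pub/Sub Lite topic (placeholder project and location).
pubsublite.project=my-project
pubsublite.location=us-central1-a
pubsublite.topic=topic-a
# Preserve partition affinity during migration.
pubsublite.ordering.mode=KAFKA
```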
Migrate consumers
Pub/Sub Lite's resource model is different from Kafka's. Most notably, unlike a consumer group, a subscription is an explicit resource and is associated with exactly one topic. Because of this difference, anywhere the Kafka Consumer API requires a topic to be passed, you must pass the full subscription path instead.

In addition to the SASL configurations for the Kafka client, the following settings are also required when using the Kafka Consumer API to interact with Pub/Sub Lite.

```
bootstrap.servers=REGION-kafka-pubsub.googleapis.com:443
group.id=unused
```

Replace REGION with the region where your Pub/Sub Lite subscription exists.
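The following Java sketch puts these pieces together for a consumer that reads from the consumers-a subscription. The region, project number, and subscription ID are placeholders, and the SASL properties from the authentication section are assumed to be set.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LiteConsumerSketch {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("bootstrap.servers", "us-central1-kafka-pubsub.googleapis.com:443");
    props.put("group.id", "unused"); // Required by the API but unused here.
    props.put("key.deserializer",
        "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    props.put("value.deserializer",
        "org.apache.kafka.common.serialization.ByteArrayDeserializer");
    // ...plus the SASL properties from the authentication section.

    // Pass the full subscription path wherever a topic is expected.
    String subscription =
        "projects/PROJECT_NUMBER/locations/us-central1-a/subscriptions/consumers-a";
    try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
      consumer.subscribe(Collections.singletonList(subscription));
      while (true) {
        ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
        for (ConsumerRecord<byte[], byte[]> record : records) {
          System.out.printf("partition=%d offset=%d%n", record.partition(), record.offset());
        }
      }
    }
  }
}
```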
Before starting the first Pub/Sub Lite consumer job for a given subscription, you can initiate (but don't wait on) an admin seek operation to set the initial location for your consumer.

When you start your consumers, they reconnect to the current offset in the message backlog. Run both the old and new clients in parallel for as long as it takes to verify their behavior, then turn down the old consumer clients.
Migrate producers
In addition to the SASL configurations for the Kafka client, the following is also required as a producer parameter when using the Kafka Producer API to interact with Pub/Sub Lite.

```
bootstrap.servers=REGION-kafka-pubsub.googleapis.com:443
```

Replace REGION with the region where your Pub/Sub Lite topic exists.
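A minimal Java producer sketch follows. The region and topic path are placeholders, the SASL properties from the authentication section are assumed, and passing the full topic path where a Kafka topic name is expected mirrors how the Consumer API takes subscription paths; verify this against your setup.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class LiteProducerSketch {
  public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put("bootstrap.servers", "us-central1-kafka-pubsub.googleapis.com:443");
    props.put("key.serializer",
        "org.apache.kafka.common.serialization.ByteArraySerializer");
    props.put("value.serializer",
        "org.apache.kafka.common.serialization.ByteArraySerializer");
    props.put("compression.type", "none"); // Compression isn't supported.
    // ...plus the SASL properties from the authentication section.

    String topic = "projects/PROJECT_NUMBER/locations/us-central1-a/topics/topic-a";
    try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
      RecordMetadata metadata =
          producer.send(new ProducerRecord<>(topic, "key".getBytes(), "value".getBytes()))
              .get(); // Block until the publish is acknowledged.
      System.out.println("Published to partition " + metadata.partition());
    }
  }
}
```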
After you migrate all the consumers of the topic to read from Pub/Sub Lite, move your producer traffic to write to Pub/Sub Lite directly.

Gradually migrate the producer clients to write to the Pub/Sub Lite topic instead of the Kafka topic.
Restart the producer clients to pick up new configurations.
Turn down Kafka Connect
After you migrate all the producers to write to Pub/Sub Lite directly, the connector no longer copies data.
You can turn down the Kafka Connect instance.
Troubleshoot Kafka connections
Since Kafka clients communicate through a bespoke wire protocol, we cannot provide error messages for failures in all requests. Rely on the error codes sent as part of the message.

You can see more details about errors that occur in the client by setting the logging level for the org.apache.kafka prefix to FINEST.
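For example, with java.util.logging you can raise the level in a logging.properties file like the following sketch; adapt it if your client uses a different logging framework.

```
# Route FINEST-level Kafka client logs to the console.
handlers=java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level=FINEST
org.apache.kafka.level=FINEST
```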
Low throughput and increasing backlog
There are multiple reasons why you might be seeing low throughput and an increasing backlog. One reason might be insufficient capacity.

You can configure throughput capacity at the topic level or by using reservations. If insufficient throughput capacity is configured for subscribing or publishing, the corresponding throughput is throttled.
This throughput error is signaled by the topic/flow_control_status metric for publishers, and the subscription/flow_control_status metric for subscribers. The metric provides the following states:

- NO_PARTITION_CAPACITY: This message indicates that the per-partition throughput limit is reached.
- NO_RESERVATION_CAPACITY: This message indicates that the per-reservation throughput limit is reached.
You can view the utilization graphs for the topic or reservation publish and subscribe quota and check whether utilization is at or near 100%.
To resolve this issue, increase the throughput capacity of the topic or reservation.
Topic authorization failed error message
Publishing by using the Kafka API requires the Lite service agent to have the right permissions to publish to the Pub/Sub Lite topic.

You get the error TOPIC_AUTHORIZATION_FAILED in your client if you don't have the correct permissions to publish to the Pub/Sub Lite topic.

To resolve the issue, check the Lite service agent for the project that you passed in the auth configuration.
Invalid topic error message
Subscribing by using the Kafka API requires passing the full subscription path in all places where a topic is expected in the Kafka Consumer API.

You get the error INVALID_TOPIC_EXCEPTION in your consumer client if you don't pass a well-formatted subscription path.
Invalid request when not using reservations
Using Kafka wire protocol support requires that all topics have an associated reservation in order to charge for usage. If a topic doesn't have an associated reservation, requests against it fail as invalid.