Try Confluent Cloud - The Data Streaming Platform

Confluent's Python Client for Apache Kafka™

confluent-kafka-python provides a high-level Producer, Consumer and AdminClient compatible with all Apache Kafka™ brokers >= v0.8, Confluent Cloud and Confluent Platform.

Recommended for Production: While this client works with any Kafka deployment, it's optimized for and fully supported with Confluent Cloud (fully managed) and Confluent Platform (self-managed), which provide enterprise-grade security, monitoring, and support.

Why Choose Confluent's Python Client?

Unlike the basic Apache Kafka Python client, confluent-kafka-python provides:

  • Production-Ready Performance: Built on librdkafka (C library) for maximum throughput and minimal latency, significantly outperforming pure Python implementations.
  • Enterprise Features: Schema Registry integration, transactions, exactly-once semantics, and advanced serialization support out of the box.
  • AsyncIO Support: Native async/await support for modern Python applications, not available in the Apache Kafka client.
  • Comprehensive Serialization: Built-in Avro, Protobuf, and JSON Schema support with automatic schema evolution handling.
  • Professional Support: Backed by Confluent's engineering team with enterprise SLAs and 24/7 support options.
  • Active Development: Continuously updated with the latest Kafka features and performance optimizations.
  • Battle-Tested: Used by thousands of organizations in production, from startups to Fortune 500 companies.

Performance Note: The Apache Kafka Python client (kafka-python) is a pure Python implementation that, while functional, has significant performance limitations for high-throughput production use cases. confluent-kafka-python leverages the same high-performance C library (librdkafka) used by Confluent's other clients, providing enterprise-grade performance and reliability.

Key Features

  • High Performance & Reliability: Built on librdkafka, the battle-tested C client for Apache Kafka, ensuring maximum throughput, low latency, and stability. The client is supported by Confluent and is trusted in mission-critical production environments.
  • Comprehensive Kafka Support: Full support for the Kafka protocol, transactions, and administration APIs.
  • AsyncIO Producer (Experimental): An experimental, fully asynchronous producer (AIOProducer) for seamless integration with modern Python applications using asyncio.
  • Seamless Schema Registry Integration: Synchronous and asynchronous clients for Confluent Schema Registry to handle schema management and serialization (Avro, Protobuf, JSON Schema).
  • Improved Error Handling: Detailed, context-aware error messages and exceptions to speed up debugging and troubleshooting.
  • [Confluent Cloud] Automatic Zone Detection: Producers automatically connect to brokers in the same availability zone, reducing latency and data transfer costs without requiring manual configuration.
  • [Confluent Cloud] Simplified Configuration Profiles: Pre-defined configuration profiles optimized for common use cases like high throughput or low latency, simplifying client setup.
  • Enterprise Support: Backed by Confluent's expert support team with SLAs and 24/7 assistance for production deployments.

Usage

For a step-by-step guide on using the client, see Getting Started with Apache Kafka and Python.

Choosing Your Kafka Deployment

  • Confluent Cloud - Fully managed service with automatic scaling, security, and monitoring. Best for teams wanting to focus on applications rather than infrastructure.
  • Confluent Platform - Self-managed deployment with enterprise features, support, and tooling. Ideal for on-premises or hybrid cloud requirements.
  • Apache Kafka - Open source deployment. Requires manual setup, monitoring, and maintenance.

Additional examples can be found in the examples directory or the confluentinc/examples GitHub repo, which include demonstrations of:

  • Exactly once data processing using the transactional API.
  • Integration with asyncio.
  • (De)serializing Protobuf, JSON, and Avro data with Confluent Schema Registry integration.
  • Confluent Cloud configuration.

Also see the Python client docs and the API reference.

Finally, the tests are useful as a reference for example usage.

AsyncIO Producer (experimental)

Use the AIOProducer inside async applications to avoid blocking the event loop.

```python
import asyncio

from confluent_kafka.experimental.aio import AIOProducer


async def main():
    p = AIOProducer({"bootstrap.servers": "mybroker"})
    try:
        # produce() returns a Future; first await the coroutine to get the Future,
        # then await the Future to get the delivered Message.
        delivery_future = await p.produce("mytopic", value=b"hello")
        delivered_msg = await delivery_future
        # Optionally flush any remaining buffered messages before shutdown
        await p.flush()
    finally:
        await p.close()


asyncio.run(main())
```

Notes:

  • Batched async produce buffers messages; delivery callbacks, stats, errors, and the logger run on the event loop.
  • Per-message headers are not supported in the batched async path. If headers are required, use the synchronous Producer.produce(...) (you can offload it to a thread in async apps).

For a more detailed example that includes both an async producer and consumer, see examples/asyncio_example.py.

Architecture: For implementation details and component architecture, see the AIOProducer Architecture Overview.

When to use AsyncIO vs synchronous Producer

  • Use AIOProducer when your code runs under an event loop (FastAPI/Starlette, aiohttp, Sanic, asyncio workers) and must not block.
  • Use the synchronous Producer for scripts, batch jobs, and highest-throughput pipelines where you control threads/processes and can call poll()/flush() directly.
  • In async servers, prefer AIOProducer; if you need headers, call the sync produce() via run_in_executor for that path.
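The run_in_executor fallback mentioned above can be sketched as follows. This is a minimal, broker-free illustration: blocking_produce is a hypothetical stand-in for the synchronous Producer.produce()/flush() path, not part of the client API.

```python
import asyncio
from functools import partial


def blocking_produce(topic, value, headers=None):
    # Hypothetical stand-in for the synchronous Producer path,
    # which supports per-message headers. In a real application
    # this would call confluent_kafka.Producer.produce() and flush().
    return (topic, value, headers)


async def produce_with_headers(topic, value, headers):
    loop = asyncio.get_running_loop()
    # Offload the blocking call to the default thread pool so the
    # event loop stays responsive.
    return await loop.run_in_executor(
        None, partial(blocking_produce, topic, value, headers))


result = asyncio.run(
    produce_with_headers("mytopic", b"hello", [("trace-id", b"abc123")]))
print(result)
```

The same pattern applies to any other blocking client call you need to make from async code.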

AsyncIO with Schema Registry

The AsyncIO producer and consumer integrate seamlessly with async Schema Registry serializers. See the Schema Registry Integration section below for full details.

Basic Producer example

```python
from confluent_kafka import Producer

p = Producer({'bootstrap.servers': 'mybroker1,mybroker2'})


def delivery_report(err, msg):
    """ Called once for each message produced to indicate delivery result.
        Triggered by poll() or flush(). """
    if err is not None:
        print('Message delivery failed: {}'.format(err))
    else:
        print('Message delivered to {} [{}]'.format(msg.topic(), msg.partition()))


for data in some_data_source:
    # Trigger any available delivery report callbacks from previous produce() calls
    p.poll(0)

    # Asynchronously produce a message. The delivery report callback will
    # be triggered from the call to poll() above, or flush() below, when the
    # message has been successfully delivered or failed permanently.
    p.produce('mytopic', data.encode('utf-8'), callback=delivery_report)

# Wait for any outstanding messages to be delivered and delivery report
# callbacks to be triggered.
p.flush()
```

For a discussion of the poll-based producer API, refer to the Integrating Apache Kafka With Python Asyncio Web Applications blog post.

Schema Registry Integration

This client provides full integration with Schema Registry for schema management and message serialization, and is compatible with both Confluent Platform and Confluent Cloud. Both synchronous and asynchronous clients are available.

Learn more

Synchronous Client & Serializers

Use the synchronous SchemaRegistryClient with the standard Producer and Consumer.

```python
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer, SerializationContext, MessageField

# 1. Configure the Schema Registry client
schema_registry_conf = {'url': 'http://localhost:8081'}  # Confluent Platform
# For Confluent Cloud, add: 'basic.auth.user.info': '<sr-api-key>:<sr-api-secret>'
# See: https://docs.confluent.io/cloud/current/sr/index.html
schema_registry_client = SchemaRegistryClient(schema_registry_conf)

# 2. Configure the serializers
string_serializer = StringSerializer('utf_8')
avro_serializer = AvroSerializer(schema_registry_client,
                                 user_schema_str,
                                 lambda user, ctx: user.to_dict())

# 3. Configure the Producer. Serializers are applied explicitly below,
# since the core Producer does not accept serializer configuration.
producer = Producer({'bootstrap.servers': 'localhost:9092'})

# 4. Serialize and produce messages
producer.produce('my-topic',
                 key=string_serializer('user1'),
                 value=avro_serializer(some_user_object,
                                       SerializationContext('my-topic', MessageField.VALUE)))
producer.flush()
```

Asynchronous Client & Serializers (AsyncIO)

Use the AsyncSchemaRegistryClient and Async serializers with AIOProducer and AIOConsumer. The configuration is the same as for the synchronous client.

```python
from confluent_kafka.experimental.aio import AIOProducer
from confluent_kafka.schema_registry import AsyncSchemaRegistryClient
from confluent_kafka.schema_registry._async.avro import AsyncAvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

# Setup async Schema Registry client and serializer
# (See configuration options in the synchronous example above)
schema_registry_conf = {'url': 'http://localhost:8081'}
schema_client = AsyncSchemaRegistryClient(schema_registry_conf)
serializer = await AsyncAvroSerializer(schema_client, schema_str=avro_schema)

# Use with the AsyncIO producer
producer = AIOProducer({"bootstrap.servers": "localhost:9092"})
serialized_value = await serializer(data, SerializationContext("topic", MessageField.VALUE))
delivery_future = await producer.produce("topic", value=serialized_value)
```

Available async serializers: AsyncAvroSerializer, AsyncJSONSerializer, AsyncProtobufSerializer (and corresponding deserializers).

See also:

Import paths

```python
from confluent_kafka.schema_registry._async.avro import AsyncAvroSerializer, AsyncAvroDeserializer
from confluent_kafka.schema_registry._async.json_schema import AsyncJSONSerializer, AsyncJSONDeserializer
from confluent_kafka.schema_registry._async.protobuf import AsyncProtobufSerializer, AsyncProtobufDeserializer
```

Client-Side Field Level Encryption (CSFLE): To use Data Contracts rules (including CSFLE), install the rules extra (see the Install section), and refer to the encryption examples in examples/README.md. For CSFLE-specific guidance, see the Confluent Cloud CSFLE documentation.

Note: The async Schema Registry interface mirrors the synchronous client exactly: same configuration options, same calling patterns, no unexpected gotchas or limitations. Simply add await to method calls and use the Async-prefixed classes.

Troubleshooting

  • 401/403 Unauthorized when using Confluent Cloud: Verify your basic.auth.user.info (SR API key/secret) is correct and that the Schema Registry URL is for your specific cluster. Ensure you are using an SR API key, not a Kafka API key.
  • Schema not found: Check that your subject.name.strategy configuration matches how your schemas are registered in Schema Registry, and that the topic and message field (key/value) pairing is correct.
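As a rough illustration of how the common subject name strategies map topics and record names to Schema Registry subjects (semantics per the Schema Registry documentation; the helper functions below are hypothetical, not part of the client API):

```python
# Hypothetical helpers mirroring the documented subject naming schemes.

def topic_name_strategy(topic, field):
    # Default strategy: subject is derived from the topic plus "key"/"value".
    return f"{topic}-{field}"


def record_name_strategy(record_fullname):
    # Subject is the fully-qualified record name, independent of topic.
    return record_fullname


def topic_record_name_strategy(topic, record_fullname):
    # Combines both, allowing multiple record types per topic.
    return f"{topic}-{record_fullname}"


print(topic_name_strategy("orders", "value"))                     # orders-value
print(record_name_strategy("com.example.Order"))                  # com.example.Order
print(topic_record_name_strategy("orders", "com.example.Order"))  # orders-com.example.Order
```

A "schema not found" error usually means the subject your serializer computes (e.g. "orders-value") differs from the subject under which the schema was registered.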

Basic Consumer example

```python
from confluent_kafka import Consumer

c = Consumer({
    'bootstrap.servers': 'mybroker',
    'group.id': 'mygroup',
    'auto.offset.reset': 'earliest'
})

c.subscribe(['mytopic'])

while True:
    msg = c.poll(1.0)

    if msg is None:
        continue
    if msg.error():
        print("Consumer error: {}".format(msg.error()))
        continue

    print('Received message: {}'.format(msg.value().decode('utf-8')))

c.close()
```

Basic AdminClient example

Create topics:

```python
from confluent_kafka.admin import AdminClient, NewTopic

a = AdminClient({'bootstrap.servers': 'mybroker'})

new_topics = [NewTopic(topic, num_partitions=3, replication_factor=1)
              for topic in ["topic1", "topic2"]]
# Note: In a multi-cluster production scenario, it is more typical to use a
# replication_factor of 3 for durability.

# Call create_topics to asynchronously create topics. A dict
# of <topic,future> is returned.
fs = a.create_topics(new_topics)

# Wait for each operation to finish.
for topic, f in fs.items():
    try:
        f.result()  # The result itself is None
        print("Topic {} created".format(topic))
    except Exception as e:
        print("Failed to create topic {}: {}".format(topic, e))
```

Thread safety

The Producer, Consumer, and AdminClient are all thread safe.
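Because the clients are thread safe, the usual pattern is to share a single instance across threads rather than creating one per thread. A minimal sketch of that pattern, using a hypothetical FakeProducer stand-in so it runs without a broker (a real app would share one confluent_kafka.Producer the same way):

```python
import threading


class FakeProducer:
    # Hypothetical stand-in for confluent_kafka.Producer; only models
    # the produce() call used below, guarded by a lock for illustration.
    def __init__(self):
        self._lock = threading.Lock()
        self.sent = []

    def produce(self, topic, value):
        with self._lock:
            self.sent.append((topic, value))


producer = FakeProducer()  # one shared instance, not one per thread


def worker(i):
    producer.produce('mytopic', 'message-{}'.format(i).encode())


threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(producer.sent))  # 4
```

Sharing one Producer also lets librdkafka batch messages from all threads into the same internal queue.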

Install

```shell
# Basic installation
pip install confluent-kafka

# With Schema Registry support
pip install "confluent-kafka[avro,schemaregistry]"      # Avro
pip install "confluent-kafka[json,schemaregistry]"      # JSON Schema
pip install "confluent-kafka[protobuf,schemaregistry]"  # Protobuf

# With Data Contract rules (includes CSFLE support)
pip install "confluent-kafka[avro,schemaregistry,rules]"
```

Note: Pre-built Linux wheels do not include SASL Kerberos/GSSAPI support. For Kerberos, see the source installation instructions in INSTALL.md.

To use Schema Registry with the Avro serializer/deserializer:

```shell
pip install "confluent-kafka[avro,schemaregistry]"
```

To use Schema Registry with the JSON serializer/deserializer:

```shell
pip install "confluent-kafka[json,schemaregistry]"
```

To use Schema Registry with the Protobuf serializer/deserializer:

```shell
pip install "confluent-kafka[protobuf,schemaregistry]"
```

When using Data Contract rules (including CSFLE), add the rules extra, e.g.:

```shell
pip install "confluent-kafka[avro,schemaregistry,rules]"
```

Install from source

For a source install, see the Install from source section in INSTALL.md.

Broker compatibility

The Python client (as well as the underlying C library, librdkafka) supports all broker versions >= 0.8. But due to the nature of the Kafka protocol in broker versions 0.8 and 0.9, it is not safe for a client to assume what protocol version is actually supported by the broker, so you will need to hint the Python client what protocol version it may use. This is done through two configuration settings:

  • broker.version.fallback=YOUR_BROKER_VERSION (default 0.9.0.1)
  • api.version.request=true|false (default true)
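For example, configuration for the two cases can be sketched as plain dicts (mybroker and the exact fallback version are placeholders; substitute your own broker address and version):

```python
# Legacy broker (0.8/0.9): disable the ApiVersionRequest probe and
# pin the fallback to the actual broker version.
legacy_conf = {
    'bootstrap.servers': 'mybroker',
    'api.version.request': False,
    'broker.version.fallback': '0.8.2.2',  # your actual broker version
}

# Kafka >= 0.10 broker: the defaults (api.version.request=true) are fine,
# so no version hints are needed.
modern_conf = {'bootstrap.servers': 'mybroker'}

print(legacy_conf['broker.version.fallback'])  # 0.8.2.2
```

Either dict can be passed directly to Producer, Consumer, or AdminClient.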

When using a Kafka 0.10 broker or later you don't need to do anything (api.version.request=true is the default).

