A sample RESTful API with Kafka Streams (2.6.0) using Spring Boot (2.3.3) and Java 14.

ben-jamin-chen/springboot-kafka-streams-rest-api

While looking through the Kafka Tutorials to see how I could set up a Spring Boot API project with Kafka Streams, I found it strange that there wasn't a complete or more informative example of how this could be achieved. Most use cases demonstrated how to compute aggregations and how to build simple topologies, but it was difficult to find a concrete example of how to build an API service that could query into these materialized state stores. Anyway, I thought I'd create my own using a more recent version of Spring Boot with Java 14.

What You Need

  • Java 14
  • Maven 3.6.0+
  • Docker 19+

Getting Started

We first need to launch the Confluent services (i.e. ZooKeeper, Broker, Schema Registry) locally by running the `docker-compose up -d` CLI command from the directory containing the `docker-compose.yml` file. Typically, you can create a stack file (in the form of a YAML file) to define your applications. You can also run `docker-compose ps` to check the status of the stack. Notice that the endpoints differ from within the containers versus from your host machine.

| Name | From within containers | From host machine |
| --- | --- | --- |
| Kafka Broker | broker:9092 | localhost:9092 |
| Schema Registry | http://schema-registry:8081 | http://localhost:8081 |
| ZooKeeper | zookeeper:2181 | localhost:2181 |

Note: you can run `docker-compose down` to stop all services and containers.
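For orientation, a stack file for these three services usually looks something like the sketch below. This is a simplified, single-listener illustration (the image tags, ports, and listener settings here are assumptions, not taken from this repository); the project's own `docker-compose.yml` is authoritative, and a real file typically defines separate internal and host listeners so both `broker:9092` and `localhost:9092` resolve correctly.

```yaml
# Illustrative only -- defer to the repository's actual docker-compose.yml.
version: "3"
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:6.0.0
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
  broker:
    image: confluentinc/cp-kafka:6.0.0
    depends_on: [zookeeper]
    ports: ["9092:9092"]
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://broker:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  schema-registry:
    image: confluentinc/cp-schema-registry:6.0.0
    depends_on: [broker]
    ports: ["8081:8081"]
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:9092
```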

As part of this sample, I've retrofitted the average aggregate example from Confluent's Kafka Tutorials into this project. The API will calculate and return a running average rating for a given movie identifier. This should demonstrate how to build a basic API service on top of an aggregation result.

Notice in the `~/src/main/avro` directory, we have all our Avro schema files for the stream of `ratings` and `countsum`. For your convenience, the classes were already generated under the `~/src/main/java/io/confluent/demo` directory, but feel free to tinker with them and recompile the schemas if needed. The Avro classes can be programmatically generated using Maven or by manually invoking the schema compiler.
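For the Maven route, the `avro-maven-plugin` is the usual way to wire schema compilation into the build. The fragment below is a typical configuration rather than a copy of this project's `pom.xml` (the version number and directories are illustrative assumptions):

```xml
<!-- Illustrative avro-maven-plugin setup; check the project's pom.xml
     for the exact version and paths it actually uses. -->
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.10.0</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <sourceDirectory>${project.basedir}/src/main/avro</sourceDirectory>
        <outputDirectory>${project.basedir}/src/main/java</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With this in place, `mvn generate-sources` regenerates the Java classes from the `.avsc` files.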

So before building and running the project, open a new terminal and run the following commands to create your input and output topics.

```shell
$ docker-compose exec broker kafka-topics --create --bootstrap-server \
    localhost:9092 --replication-factor 1 --partitions 1 --topic ratings

$ docker-compose exec broker kafka-topics --create --bootstrap-server \
    localhost:9092 --replication-factor 1 --partitions 1 --topic rating-averages
```

Next, we will need to produce some data onto the input topic.

```shell
$ docker exec -i schema-registry /usr/bin/kafka-avro-console-producer --topic ratings --broker-list broker:9092 \
    --property "parse.key=false" \
    --property "key.separator=:" \
    --property value.schema="$(< src/main/avro/rating.avsc)"
```

Paste in the following JSON data when prompted, and be sure to press Enter twice to actually submit it.

```json
{"movie_id": 362, "rating": 10}
{"movie_id": 362, "rating": 8}
```
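Under the hood, the streams application keeps a running count and sum per movie and divides them to produce the average. The plain-Java sketch below shows just that arithmetic; the class and method names are illustrative, not the project's actual code, which performs this inside a Kafka Streams aggregation over Avro-generated records.

```java
import java.util.List;

// Sketch of the count-and-sum style running average the streams app maintains.
public class RunningAverageSketch {
    static double average(List<Integer> ratings) {
        long count = 0;
        double sum = 0.0;
        for (int r : ratings) { // each incoming rating updates count and sum
            count++;
            sum += r;
        }
        return sum / count;     // the queryable value is sum / count
    }

    public static void main(String[] args) {
        // The two sample ratings for movie_id 362 average to 9.0
        System.out.println(average(List.of(10, 8))); // prints 9.0
    }
}
```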

Optionally, you can also see the consumer results on the output topic by running this command on a new terminal window:

```shell
$ docker exec -it broker /usr/bin/kafka-console-consumer --topic rating-averages --bootstrap-server broker:9092 \
    --property "print.key=true" \
    --property "key.deserializer=org.apache.kafka.common.serialization.LongDeserializer" \
    --property "value.deserializer=org.apache.kafka.common.serialization.DoubleDeserializer" \
    --from-beginning
```

Build and Run the Sample

You can import the code straight into your preferred IDE or run the sample using the following command (in the root project folder).

```shell
$ mvn spring-boot:run
```

After the application runs, navigate to http://localhost:7001/swagger-ui/index.html?configUrl=/api-docs/swagger-config in your web browser to access the Swagger UI. If you used the same sample data from above, you can enter `362` as the `movieId` and it should return something similar to this:

```json
{"movieId": 362, "rating": 9}
```

Note: keep in mind the various states of the client. When a Kafka Streams instance is in the RUNNING state, it allows for inspection of the stream's metadata using methods like `queryMetadataForKey()`. While it is in the REBALANCING state, the REST service cannot immediately answer requests until the state stores are fully rebuilt.
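In code, a REST layer built on interactive queries typically checks the client state before touching a store. The fragment below is a hedged sketch of that pattern rather than this project's actual code: it assumes a `KafkaStreams` instance named `streams`, a store named `average-ratings`, and the Kafka Streams library on the classpath, so it is illustrative rather than runnable on its own.

```java
// Illustrative sketch only -- names of the streams instance and store are assumptions.
if (streams.state() == KafkaStreams.State.RUNNING) {
    ReadOnlyKeyValueStore<Long, Double> store = streams.store(
        StoreQueryParameters.fromNameAndType(
            "average-ratings", QueryableStoreTypes.keyValueStore()));
    Double rating = store.get(movieId); // null if the key is absent
} else {
    // REBALANCING: state stores may still be restoring;
    // respond with 503 or ask the client to retry later.
}
```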

Troubleshooting

  • In certain conditions, you may need to do a complete application reset. You can delete the application's local state directory where the application instance was run. In this project, Kafka Streams persists local state under the `~/data` folder.
