Apache Iggy: Hyper-Efficient Message Streaming at Laser Speed
Iggy is a persistent message streaming platform written in Rust, supporting QUIC, TCP (custom binary specification) and HTTP (regular REST API) transport protocols, capable of processing millions of messages per second at ultra-low latency.

Iggy provides exceptionally high throughput and performance while utilizing minimal computing resources.

This is not yet another extension running on top of existing infrastructure, such as Kafka or a SQL database. Iggy is a persistent message streaming log built from the ground up using low-level I/O for speed and efficiency.

The name is an abbreviation for the Italian Greyhound - small yet extremely fast dogs, the best in their class. See the lovely Fabio & Cookie ❤️
- Highly performant, persistent append-only log for message streaming
- Very high throughput for both writes and reads
- Low latency and predictable resource usage thanks to the Rust compiled language (no GC)
- User authentication and authorization with granular permissions and Personal Access Tokens (PAT)
- Support for multiple streams, topics and partitions
- Support for multiple transport protocols (QUIC, TCP, HTTP)
- Fully operational RESTful API which can be optionally enabled
- Available client SDK in multiple languages
- Works directly with binary data, avoiding enforced schema and serialization/deserialization overhead
- Custom zero-copy (de)serialization, which greatly improves performance and reduces memory usage
- Configurable server features (e.g. caching, segment size, data flush interval, transport protocols etc.)
- Server-side storage of consumer offsets
- Multiple ways of polling the messages:
  - By offset (using the indexes)
  - By timestamp (using the time indexes)
  - First/Last N messages
  - Next N messages for the specific consumer
- Possibility of auto-committing the offset (e.g. to achieve at-most-once delivery)
- Consumer groups providing message ordering and horizontal scaling across the connected clients
- Message expiry with auto deletion based on the configurable retention policy
- Additional features such as server-side message deduplication
- Multi-tenant support via abstraction of streams which group topics
- TLS support for all transport protocols (TCP, QUIC, HTTPS)
- Connectors - sinks, sources and data transformations based on custom Rust plugins
- Optional server-side as well as client-side data encryption using AES-256-GCM
- Optional metadata support in the form of message headers
- Optional data backups and archiving to disk or S3-compatible cloud storage (e.g. AWS S3)
- Support for OpenTelemetry logs & traces + Prometheus metrics
- Built-in CLI to manage the streaming server, installable via `cargo install iggy-cli`
- Built-in benchmarking app to test the performance
- Single binary deployment (no external dependencies)
- Running as a single node (clustering based on Viewstamped Replication will be implemented in the near future)
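To make the consumer-group feature above concrete, here is a self-contained, conceptual sketch (not Iggy's actual assignment algorithm) of how partitions might be distributed across group members: each partition gets exactly one owner, which preserves per-partition ordering while the overall load scales horizontally across the connected clients.

```rust
use std::collections::BTreeMap;

// Conceptual sketch only: assign each partition to exactly one group member
// (round-robin), so per-partition ordering is preserved while the load
// spreads across the clients that joined the group.
fn assign_partitions<'a>(partitions: u32, members: &[&'a str]) -> BTreeMap<&'a str, Vec<u32>> {
    let mut assignment: BTreeMap<&'a str, Vec<u32>> = BTreeMap::new();
    for partition_id in 0..partitions {
        let owner = members[(partition_id as usize) % members.len()];
        assignment.entry(owner).or_default().push(partition_id);
    }
    assignment
}

fn main() {
    // Two consumers in the group, four partitions in the topic.
    let assignment = assign_partitions(4, &["consumer-a", "consumer-b"]);
    for (member, partitions) in &assignment {
        println!("{member} owns partitions {partitions:?}");
    }
    // consumer-a owns [0, 2]; consumer-b owns [1, 3]
}
```

When a member joins or leaves, the server rebalances the assignment; the invariant that matters is one owner per partition at any given time.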
This is the high-level architecture of the Iggy message streaming server, where extremely high performance and ultra-low, stable tail latencies are the primary goals. The server is designed to handle high throughput and very low latency (sub-millisecond tail latencies), making it suitable for real-time applications. For more details, please refer to the documentation.
The latest released version is 0.4.300 for the Iggy server, which is compatible with the 0.6 Rust client SDK and the others. The recent improvements based on the zero-copy (de)serialization, along with updated SDKs etc., will be available in the upcoming release: Iggy server 0.5.0, Rust SDK 0.7 and all the other SDKs.
- Shared-nothing design and io_uring support (PoC on experimental branch, WiP on the main branch)
- Clustering & data replication based on VSR (on sandbox project using Raft, will be implemented after the shared-nothing design is completed)
Client SDKs are available for the following languages:

- Rust
- C#
- Java
- Go
- Python
- Node
- C++
- Elixir
The brand new, rich, interactive CLI is implemented under the `cli` project, to provide the best developer experience. This is a great addition to the Web UI, especially for all the developers who prefer using the console tools. Iggy CLI can be installed with `cargo install iggy-cli` and then simply accessed by typing `iggy` in your terminal.
There's a dedicated Web UI for the server, which allows managing the streams, topics, partitions, browsing the messages and so on. This is an ongoing effort to build a comprehensive dashboard for administrative purposes of the Iggy server. Check the Web UI in the `/web` directory. The Docker image for the Web UI is available and can be fetched via `docker pull iggyrs/iggy-web-ui`.
The official images can be found on Docker Hub; simply type `docker pull apache/iggy` to pull the image. Please note that the images tagged as `latest` are based on the official, stable releases, while the `edge` ones are updated directly from the latest version of the `master` branch.

You can find the `Dockerfile` and `docker-compose` in the root of the repository. To build and start the server, run `docker compose up`. Additionally, you can run the CLI, which is available in the running container, by executing `docker exec -it iggy-server /iggy`.
Keep in mind that running the container on an OS other than Linux, where Docker runs inside a VM, might result in performance degradation.
The default configuration can be found in the `server.toml` file in the `configs` directory. The configuration file is loaded from the current working directory, but you can specify the path to the configuration file by setting the `IGGY_CONFIG_PATH` environment variable, for example `export IGGY_CONFIG_PATH=configs/server.toml` (or an equivalent command depending on the OS). When the config file is not found, the default values from the embedded `server.toml` file are used. For the detailed documentation of the configuration file, please refer to the configuration section.
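For illustration, a fragment of such a configuration file might look like the sketch below. The section and key names here are assumptions inferred from the environment variables used elsewhere in this README (e.g. `IGGY_TCP_ADDRESS`, `IGGY_HTTP_ENABLED`), not an authoritative reference, so consult the embedded `server.toml` for the real schema:

```toml
# Hypothetical fragment - verify all key names against configs/server.toml.

[tcp]
enabled = true
address = "0.0.0.0:8090"

[http]
enabled = true
address = "0.0.0.0:3000"

[system]
# Root directory where streams, topics and partitions are persisted.
path = "local_data"
```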
Build the project (the longer compilation time is due to LTO being enabled in the release profile):

```bash
cargo build
```
Run the tests:

```bash
cargo test
```

Start the server:

```bash
cargo run --bin iggy-server
```

For configuration options and detailed help:

```bash
cargo run --bin iggy-server -- --help
```
You can also use environment variables to override any configuration setting:

```bash
# Override TCP address
IGGY_TCP_ADDRESS=0.0.0.0:8090 cargo run --bin iggy-server

# Set custom data path
IGGY_SYSTEM_PATH=/data/iggy cargo run --bin iggy-server

# Enable HTTP transport
IGGY_HTTP_ENABLED=true cargo run --bin iggy-server
```
To quickly generate the sample data:

```bash
cargo run --bin data-seeder-tool
```
Please note that all commands below are using the `iggy` binary, which is part of the release (`cli` sub-crate).

Create a stream with name `dev` (the numerical ID will be assigned by the server automatically) using default credentials and the `tcp` transport (available transports: `quic`, `tcp`, `http`; default `tcp`):

```bash
cargo run --bin iggy -- --transport tcp --username iggy --password iggy stream create dev
```
List available streams:

```bash
cargo run --bin iggy -- --username iggy --password iggy stream list
```

Get `dev` stream details:

```bash
cargo run --bin iggy -- -u iggy -p iggy stream get dev
```
Create a topic named `sample` (the numerical ID will be assigned by the server automatically) for stream `dev`, with 2 partitions (IDs 1 and 2), disabled compression (`none`) and disabled message expiry (skipped optional parameter):

```bash
cargo run --bin iggy -- -u iggy -p iggy topic create dev sample 2 none
```

List available topics for stream `dev`:

```bash
cargo run --bin iggy -- -u iggy -p iggy topic list dev
```

Get topic details for topic `sample` in stream `dev`:

```bash
cargo run --bin iggy -- -u iggy -p iggy topic get dev sample
```
Send a message 'hello world' (message ID 1) to the stream `dev`, topic `sample` and partition 1:

```bash
cargo run --bin iggy -- -u iggy -p iggy message send --partition-id 1 dev sample "hello world"
```

Send another message 'lorem ipsum' (message ID 2) to the same stream, topic and partition:

```bash
cargo run --bin iggy -- -u iggy -p iggy message send --partition-id 1 dev sample "lorem ipsum"
```
Poll messages by a regular consumer with ID 1 from the stream `dev` for topic `sample` and partition with ID 1, starting with offset 0, messages count 2, with auto commit enabled (storing the consumer offset on the server):

```bash
cargo run --bin iggy -- -u iggy -p iggy message poll --consumer 1 --offset 0 --message-count 2 --auto-commit dev sample 1
```
Finally, restart the server to see that it is able to load the persisted data.

The HTTP API endpoints can be found in the `server.http` file, which can be used with the REST Client extension for VS Code.

To see the detailed logs from the CLI/server, run it with the `RUST_LOG=trace` environment variable.
You can find comprehensive sample applications under the `examples/rust` directory. These examples showcase various usage patterns of the Iggy client SDK, from basic operations to advanced multi-tenant scenarios. For detailed information about available examples and how to run them, please see the Examples README.
Iggy comes with the Rust SDK, which is available on crates.io. The SDK provides both a low-level client for the specific transport, which includes message sending and polling along with all the administrative actions such as managing the streams, topics, users etc., and a high-level client, which abstracts the low-level details and provides an easy-to-use API for both message producers and consumers. You can find more examples, including the multi-tenant one, under the `examples` directory.
```rust
// Create the Iggy client
let client = IggyClient::from_connection_string("iggy://user:secret@localhost:8090")?;

// Create a producer for the given stream and one of its topics
let mut producer = client
    .producer("dev01", "events")?
    .batch_length(1000)
    .linger_time(IggyDuration::from_str("1ms")?)
    .partitioning(Partitioning::balanced())
    .build();
producer.init().await?;

// Send some messages to the topic
let messages = vec![Message::from_str("Hello Apache Iggy")?];
producer.send(messages).await?;

// Create a consumer for the given stream and one of its topics
let mut consumer = client
    .consumer_group("my_app", "dev01", "events")?
    .auto_commit(AutoCommit::IntervalOrWhen(
        IggyDuration::from_str("1s")?,
        AutoCommitWhen::ConsumingAllMessages,
    ))
    .create_consumer_group_if_not_exists()
    .auto_join_consumer_group()
    .polling_strategy(PollingStrategy::next())
    .poll_interval(IggyDuration::from_str("1ms")?)
    .batch_length(1000)
    .build();
consumer.init().await?;

// Start consuming the messages
while let Some(message) = consumer.next().await {
    // Handle the message
}
```
Benchmarks should be first-class citizens. We believe that performance is crucial for any system, and we strive to provide the best possible performance for our users. Please check why we believe that transparent benchmarking is so important.

We've also built the benchmarking platform where anyone can upload benchmarks and compare the results with others. Source code for the platform is available in the `core/bench/dashboard` directory.

For benchmarking purposes, we've developed the dedicated `iggy-bench` tool, which is a part of the iggy project. It is a command-line tool that allows you to run a variety of fully customizable benchmarks.
To benchmark the project, first build the project in release mode:

```bash
cargo build --release
```

Then, run the benchmarking app with the desired options:

```bash
# Sending (writing) benchmark
cargo run --bin iggy-bench -r -- -v pinned-producer tcp

# Polling (reading) benchmark
cargo run --bin iggy-bench -r -- -v pinned-consumer tcp

# Parallel sending and polling benchmark
cargo run --bin iggy-bench -r -- -v pinned-producer-and-consumer tcp

# Balanced sending to multiple partitions benchmark
cargo run --bin iggy-bench -r -- -v balanced-producer tcp

# Consumer group polling benchmark
cargo run --bin iggy-bench -r -- -v balanced-consumer-group tcp

# Parallel balanced sending and polling from consumer group benchmark
cargo run --bin iggy-bench -r -- -v balanced-producer-and-consumer-group tcp

# End-to-end producing and consuming benchmark
# (single task produces and consumes messages in sequence)
cargo run --bin iggy-bench -r -- -v end-to-end-producing-consumer tcp
```
These benchmarks will start the server with the default configuration, create a stream, topic and partition, and then send or poll the messages. The default configuration is optimized for the best performance, so you might want to tweak it for your needs. If you need more options, please refer to the `iggy-bench` subcommands `help` and `examples`. For example, to run the benchmark against an already started server, provide the additional argument `--server-address 0.0.0.0:8090`.
Depending on the hardware, transport protocol (`quic`, `tcp` or `http`) and payload size (`messages-per-batch * message-size`), you might expect over 5000 MB/s (e.g. 5M of 1 KB msg/sec) throughput for writes and reads. Iggy is already capable of processing millions of messages per second in the microseconds range for p99+ latency, and with the upcoming optimizations related to io_uring support along with the shared-nothing design, it will only get better.
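The arithmetic behind that throughput figure is straightforward; the numbers below simply restate the claim above (5M messages/sec at ~1 KB each), they are not a measurement:

```rust
fn main() {
    // Figures taken from the claim above, not a benchmark result.
    let messages_per_second: u64 = 5_000_000;
    let message_size_bytes: u64 = 1_000; // ~1 KB payload per message

    // Total throughput in MB/s (using 1 MB = 1_000_000 bytes).
    let throughput_mb_per_second = messages_per_second * message_size_bytes / 1_000_000;
    println!("{throughput_mb_per_second} MB/s"); // prints: 5000 MB/s
}
```

Actual results depend heavily on batch size: larger batches amortize per-request overhead, which is why `messages-per-batch * message-size` is the quantity that matters, not message size alone.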
Please refer to the mentioned benchmarking platform, where you can browse the results achieved on different hardware configurations, using different Iggy server versions.
Please see the Contributing guide.