Kurrent seamlessly connects real-time and historical data to your machine learning workflows, simplifying processes and maximizing efficiency
ML workflows depend on data that is accurate, complete, and, ideally, delivered with consistent meaning and format.
Traditional data pipelines fall short: they take snapshots of data, introducing gaps and errors into data sets, and they let the meaning of data drift over time without passing that context along in the data sets they produce.
Events provide better context to machine learning algorithms because each one concisely records the effect (the event type) that a system command or workflow action had on a business object (its state).
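To make that concrete, here is a minimal, self-contained sketch in plain Python (the event names and fields are illustrative, not part of any Kurrent API): replaying a stream of events yields both the current state a snapshot would give you and the sequence of effects that produced it.

```python
from dataclasses import dataclass

# Hypothetical order events -- the names and fields are illustrative only.
@dataclass
class Event:
    type: str   # the effect, e.g. "OrderPlaced"
    data: dict  # the state change it carried

def replay(events):
    """Fold a stream of events into the current state of one order,
    keeping the history of *why* the state changed."""
    state, history = {}, []
    for e in events:
        state.update(e.data)
        history.append(e.type)  # context a snapshot would lose
    return state, history

stream = [
    Event("OrderPlaced",     {"order_id": 42, "total": 90.0}),
    Event("DiscountApplied", {"total": 81.0}),
    Event("OrderShipped",    {"status": "shipped"}),
]
state, history = replay(stream)
print(state)    # the final snapshot
print(history)  # the sequence of effects that produced it
```

A feature like "was a discount applied before shipping?" falls directly out of `history`; from the snapshot alone it is unrecoverable.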
Traditional data pipelines are lossy.
Kurrent changes the equation and makes your data pipelines lossless.
Capture every significant data point in real time and preserve historical records for deep analysis.
Deliver context-rich, structured events directly into your ML models without transformation.
Leveraging the community-built Kurrent Python client, data scientists can stream events, and replay streams of events, directly into ML models with no need for ETL jobs.
Events can flow directly into your models, maintaining context.
Get value from your data quickly by accessing it through fine-grained, indexed streams.
Reduce or eliminate the need for complex ETL jobs, data-preparation tooling, and imputation steps. Kurrent also provides built-in auditability across the entire span of each dataset.
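As a sketch of what "no ETL" can look like, the loop below feeds a replayed event stream straight into an incrementally trained model (a running-mean baseline here, for simplicity). The event shapes are hypothetical; in a real pipeline the events would come from a Kurrent stream via the Python client rather than from an in-memory list.

```python
# Minimal sketch: events flow directly into an online model -- no batch ETL.
# Event shapes are hypothetical and stand in for events read from a stream.

class OnlineMean:
    """Incrementally trained baseline: predicts the running mean of a target."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, y):
        self.n += 1
        self.mean += (y - self.mean) / self.n  # Welford-style running mean

    def predict(self):
        return self.mean

events = [  # a replayed stream of context-rich events
    {"type": "OrderPlaced", "data": {"total": 10.0}},
    {"type": "OrderPlaced", "data": {"total": 20.0}},
    {"type": "OrderPlaced", "data": {"total": 30.0}},
]

model = OnlineMean()
for event in events:
    if event["type"] == "OrderPlaced":  # the event type routes data to the model
        model.update(event["data"]["total"])

print(model.predict())  # → 20.0
```

Because the event type travels with the payload, routing and filtering happen inline in the consuming loop instead of in a separate transformation stage.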
Kurrent empowers your ML workflows by delivering event-native data directly to your models.
This ensures:
Seamless access to high-fidelity data
Simplified infrastructure with fewer moving parts
Enhanced model performance with detailed context
Reduced time spent on data cleansing and interpolation of gaps in data sets