A flexible, high-performance serving system for machine learning models
This is a fork of TensorFlow Serving, extended with support for the XGBoost, alphaFM and alphaFM_softmax frameworks. For more information about TensorFlow Serving, switch to the master branch or visit the TensorFlow Serving website.
XGBoost Serving is a flexible, high-performance serving system for XGBoost && FM models, designed for production environments. It deals with the inference aspect of XGBoost && FM models, taking models after training and managing their lifetimes, providing clients with versioned access via a high-performance, reference-counted lookup table. XGBoost Serving derives from TensorFlow Serving and is used widely inside iQIYI.
To note a few features:
- Can serve multiple models, or multiple versions of the same model simultaneously
- Exposes gRPC inference endpoints
- Allows deployment of new model versions without changing any client code
- Supports canarying new versions and A/B testing experimental models
- Adds minimal latency to inference time due to an efficient, low-overhead implementation
- Supports XGBoost servables, XGBoost && FM servables and XGBoost && alphaFM_Softmax servables
- Supports computation latency distribution statistics
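To make the XGBoost && FM combination concrete, the sketch below shows the general idea behind such servables: each tree's predicted leaf is one-hot encoded into a sparse feature vector, which a factorization machine then scores. This is an illustrative sketch only, not the project's actual code; all function names, shapes and values here are assumptions.

```python
import numpy as np

def fm_score(active, w0, w, V):
    """Score an FM over a set of active (one-hot) feature indices.
    active: indices of features equal to 1; w0: bias; w: linear weights;
    V: (n_features, k) latent factor matrix."""
    linear = w0 + sum(w[i] for i in active)
    # Pairwise term via the standard FM identity:
    # 0.5 * (||sum v_i||^2 - sum ||v_i||^2) over active features.
    vs = V[list(active)]
    s = vs.sum(axis=0)
    pairwise = 0.5 * float(s @ s - (vs * vs).sum())
    return linear + pairwise

# Suppose 2 trees with 4 leaves each -> 8 one-hot features.
# Tree 0 routed the sample to leaf 2, tree 1 to leaf 1:
leaf_indices = [2, 1]
leaves_per_tree = 4
active = [t * leaves_per_tree + leaf for t, leaf in enumerate(leaf_indices)]

rng = np.random.default_rng(0)
w0, w, V = 0.1, rng.normal(size=8), rng.normal(size=(8, 3))
print(fm_score(active, w0, w, V))
```

The pairwise identity avoids the explicit double loop over feature pairs, which is what makes FM scoring cheap even with wide one-hot leaf encodings.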
The easiest and most straightforward way of building and using XGBoost Serving is with Docker images. We highly recommend this route unless you have specific needs that are not addressed by running in a container.
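A typical Docker workflow might look like the following. This is a hedged sketch: the image name, tag, port, and `MODEL_NAME` environment variable are assumptions carried over from TensorFlow Serving's Docker conventions, so check the project's Docker documentation for the exact image and invocation.

```shell
# Illustrative only -- image name/tag, port, and env var are assumptions.
docker pull iqiyi/xgboost-serving:latest

# Mount a versioned model directory and expose the gRPC port (8500 in
# TensorFlow Serving's convention).
docker run -p 8500:8500 \
  -v /path/to/models/my_model:/models/my_model \
  -e MODEL_NAME=my_model \
  iqiyi/xgboost-serving:latest
```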
In order to serve an XGBoost && FM model, simply export your XGBoost model, leaf mapping and FM model.
Please refer to Export XGBoost && FM model for details about the models' specification and how to export an XGBoost && FM model.
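The export step above produces a versioned model directory; TensorFlow Serving derivatives conventionally keep one numeric subdirectory per model version. The sketch below illustrates that layout with stdlib code only; the file names (`xgboost.model`, `leaf_mapping.json`, `fm.model`) are placeholders, not the project's actual specification, which is described in the export documentation.

```python
import json
import tempfile
from pathlib import Path

def export_version(base_dir, version, xgb_bytes, leaf_mapping, fm_bytes):
    """Write one model version into the numeric-subdirectory layout.
    File names here are illustrative placeholders."""
    vdir = Path(base_dir) / str(version)
    vdir.mkdir(parents=True, exist_ok=True)
    (vdir / "xgboost.model").write_bytes(xgb_bytes)        # trained XGBoost model
    (vdir / "leaf_mapping.json").write_text(json.dumps(leaf_mapping))
    (vdir / "fm.model").write_bytes(fm_bytes)              # trained FM model
    return vdir

with tempfile.TemporaryDirectory() as d:
    vdir = export_version(d, 1, b"<xgb>", {"tree0_leaf2": 0}, b"<fm>")
    print(sorted(p.name for p in vdir.iterdir()))
```

Because versions are plain numeric directories, dropping a new `2/` directory next to `1/` is how a new version becomes visible to the server without any client-side change.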
- Follow a tutorial on Serving XGBoost && FM models
- Configure XGBoost Serving to make it fit your serving use case
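As an example of such configuration, TensorFlow Serving (which XGBoost Serving derives from) reads a `model_config_list` text proto to serve multiple models; a hedged sketch is below. The `model_platform` value and paths are assumptions, so consult the project's configuration documentation for the values XGBoost Serving actually accepts.

```
model_config_list {
  config {
    name: "my_model"
    base_path: "/models/my_model"
    model_platform: "xgboost"   # platform string is an assumption
  }
}
```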
XGBoost Serving derives from TensorFlow Serving and benefits from TensorFlow Serving's highly modular architecture. You can use some parts individually and/or extend it to serve new use cases.
- Ensure you are familiar with building TensorFlow Serving
- Learn about TensorFlow Serving's architecture
- Explore the TensorFlow Serving C++ API reference
- Create a new type of Servable
- Create a custom Source of Servable versions
If you'd like to contribute to XGBoost Serving, be sure to review the contribution guidelines.
- Report bugs, ask questions or give suggestions via GitHub Issues