Standardized Serverless ML Inference Platform on Kubernetes
KServe provides a Kubernetes Custom Resource Definition for serving predictive and generative machine learning (ML) models. It aims to solve production model serving use cases by providing high-level abstraction interfaces for TensorFlow, XGBoost, Scikit-learn, PyTorch, and Hugging Face Transformer/LLM models using standardized data plane protocols.
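As an illustration, a minimal InferenceService manifest might look like the following sketch. The service name and `storageUri` are placeholders; point them at your own model:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris            # hypothetical service name
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn           # framework the model was trained with
      storageUri: gs://your-bucket/models/sklearn/iris  # placeholder model location
```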
It encapsulates the complexity of autoscaling, networking, health checking, and server configuration to bring cutting-edge serving features like GPU autoscaling, scale-to-zero, and canary rollouts to your ML deployments. It enables a simple, pluggable, and complete story for production ML serving, including prediction, pre-processing, post-processing, and explainability. KServe is being used across various organizations.
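For example, once the service above is ready, a prediction against the standardized v1 data plane protocol is a plain HTTP call. The host and input values below are illustrative:

```bash
# Hypothetical ingress values; substitute the URL reported by
# `kubectl get inferenceservice sklearn-iris`.
curl -v \
  -H "Host: sklearn-iris.default.example.com" \
  -H "Content-Type: application/json" \
  -d '{"instances": [[6.8, 2.8, 4.8, 1.4]]}' \
  "http://${INGRESS_HOST}:${INGRESS_PORT}/v1/models/sklearn-iris:predict"
```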
For more details, visit the KServe website.
KFServing was rebranded to KServe as of v0.7.
- KServe is a standard, cloud-agnostic Model Inference Platform for serving predictive and generative AI models on Kubernetes, built for highly scalable use cases.
- Provides a performant, standardized inference protocol across ML frameworks, including the OpenAI specification for generative models.
- Supports modern serverless inference workloads with request-based autoscaling, including scale-to-zero on CPU and GPU (see the sketch after this list).
- Provides high scalability, density packing, and intelligent routing using ModelMesh.
- Simple and pluggable production serving for inference, pre/post-processing, monitoring, and explainability.
- Advanced deployments for canary rollout, pipelines, and ensembles with InferenceGraph.
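As a concrete sketch of request-based autoscaling and canary rollout, the manifest below combines both on one predictor. All field values here are illustrative choices, not recommendations, and the `storageUri` is a placeholder:

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: sklearn-iris
spec:
  predictor:
    minReplicas: 0              # scale-to-zero when the service is idle
    scaleMetric: concurrency    # autoscale on in-flight requests
    scaleTarget: 1              # target concurrent requests per replica
    canaryTrafficPercent: 10    # route 10% of traffic to the latest revision
    model:
      modelFormat:
        name: sklearn
      storageUri: gs://your-bucket/models/sklearn/iris-v2  # placeholder
```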
To learn more about KServe, how to use various supported features, and how to participate in the KServe community, please follow the KServe website documentation. Additionally, we have compiled a list of presentations and demos to dive through various details.
- Serverless Installation: KServe by default installs Knative for serverless deployment of InferenceServices.
- Raw Deployment Installation: Compared to Serverless Installation, this is a more lightweight installation. However, this option does not support canary deployment or request-based autoscaling with scale-to-zero.
- ModelMesh Installation: You can optionally install ModelMesh to enable high-scale, high-density, and frequently-changing model serving use cases.
- Quick Installation: Install KServe on your local machine (see the sketch after this list).
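For the quick installation, the repository ships a convenience script under hack/; a typical invocation looks like the following. The release branch shown is illustrative, so check the documentation for the current version:

```bash
# Installs KServe and its dependencies on a local cluster.
# The release branch below is an example; pick the current release.
curl -s "https://raw.githubusercontent.com/kserve/kserve/release-0.13/hack/quick_install.sh" | bash
```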
KServe is an important add-on component of Kubeflow; please learn more from the Kubeflow KServe documentation. Check out the following guides for running on AWS or on OpenShift Container Platform.