# MLOps: Implementing a "Brain Computer Interface" on Kubernetes
Kubernetes provides flexible control over containers through orchestration features such as scheduling, load balancing, and scaling.
It is therefore well suited to systematically building and operating every stage of an ML project, including data collection, preprocessing, feature extraction, data validation, monitoring, and deployment.
In particular, containers packaging training code can be scheduled onto nodes with GPUs, while containers packaging data-preprocessing code can be scheduled onto nodes with plenty of memory.
Also, because everything runs as Kubernetes containers, the Dockerfile guarantees that every engineer works in the same environment.
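As an illustration of scheduling training containers onto GPU nodes, a Pod spec can combine a node selector with a GPU resource request. This is a minimal sketch: the Pod name, image, and accelerator label below are placeholders, not taken from this repository.

```yaml
# Hypothetical training Pod: name, image, and label values are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: train-job
spec:
  nodeSelector:
    cloud.google.com/gke-accelerator: nvidia-tesla-t4   # GKE GPU node-pool label
  containers:
    - name: trainer
      image: example/train:latest        # placeholder training image
      resources:
        limits:
          nvidia.com/gpu: 1              # forces scheduling onto a GPU node
```

A preprocessing Pod would instead omit the GPU limit and request a large `memory` value, so the scheduler places it on a high-memory node.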
- Kubernetes: Deploy a Kubernetes cluster with Google Kubernetes Engine (GKE).
- ML Pipeline: Build an ML pipeline with Kubeflow that trains and deploys the model from a set of input parameters.
- Data Storage: Manage data with Google Cloud Storage.
- Experiment Tracking & AutoML: Use Weights & Biases to track experiments and search for optimal hyperparameters.
- Model Versioning: Manage and save models by version with MLflow.
- Model Serving: Serve the model to users over an API with BentoML.
- Monitoring: Monitor the cluster's resources with Prometheus & Grafana.
The pipeline has two branches depending on the user's parameter input.
- Condition 1: Hyperparameter tuning
- Condition 2: Train

Condition 1. Hyperparameter tuning 🔍
In "Condition: Hyperparameter tuning", the model is not trained; only tuning is performed.
The number of tuning runs can be controlled through input variables, and the tuning process can be inspected in the Weights & Biases project.
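The project drives this with Weights & Biases sweeps; the underlying idea, independent of W&B, can be sketched as a random search whose trial count is an input variable. All names below are illustrative, not the repository's code.

```python
import random

def random_search(objective, space, n_trials, seed=0):
    """Try `n_trials` random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        # Sample one value per hyperparameter from its candidate list.
        cfg = {name: rng.choice(choices) for name, choices in space.items()}
        score = objective(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy objective: rewards C=1.0 and an RBF kernel (purely illustrative).
space = {"C": [0.1, 1.0, 10.0], "kernel": ["linear", "rbf"]}
def objective(cfg):
    return (cfg["C"] == 1.0) + (cfg["kernel"] == "rbf")

best_cfg, best_score = random_search(objective, space, n_trials=20)
print(best_cfg, best_score)
```

A W&B sweep plays the same role, with the sweep agent sampling configurations and each run logging its metric for comparison.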
Condition 2. Train 🛠
In "Condition: Train", the model is trained with the input parameters; if no parameters are entered, it is trained with the default values.
The trained model is compared against the model registered in MLflow, and if it achieves better accuracy, it is uploaded to MLflow and versioned.
Once the model is stored in MLflow, it is pushed to BentoML.
Pushed models are deployed with the resources the user requests (e.g. CPU, GPU, memory).
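The promote-if-better step can be sketched as follows. This is a simplification with hypothetical names: the real project uses the MLflow model registry rather than an in-memory dict.

```python
def promote_if_better(registry, name, model, accuracy):
    """Register `model` as a new version only if it beats the current best.

    `registry` maps model name -> list of (version, model, accuracy) tuples.
    Returns the new version number, or None if the candidate was rejected.
    """
    versions = registry.setdefault(name, [])
    best_acc = max((acc for _, _, acc in versions), default=float("-inf"))
    if accuracy <= best_acc:
        return None                      # existing model is at least as good
    version = len(versions) + 1
    versions.append((version, model, accuracy))
    return version                       # caller would then push to serving

registry = {}
v1 = promote_if_better(registry, "eeg-svm", "model-a", 0.71)  # accepted -> 1
v2 = promote_if_better(registry, "eeg-svm", "model-b", 0.65)  # rejected -> None
v3 = promote_if_better(registry, "eeg-svm", "model-c", 0.80)  # accepted -> 2
print(v1, v2, v3)
```

In the real pipeline the "push to serving" step corresponds to handing the newly versioned MLflow model to BentoML.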
Prometheus & Grafana are customized to monitor the cluster.
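For example, a Prometheus scrape job that discovers and monitors every node in the cluster might look like the following. This is a generic sketch, not this repository's actual configuration.

```yaml
# prometheus.yml fragment (illustrative values)
scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node            # discover every node in the cluster
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
```

Grafana then visualizes the scraped metrics through dashboards backed by this Prometheus data source.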
```shell
# Request body shape: N x 22 x 750
$ curl \
    -X POST \
    -H "content-type: application/json" \
    --data "[[[1.1, 2.2, 3.3, 4.4], ... 5.5, 6.6, 7.7, 8.8]]]" \
    https://demo-default-yatai-127-0-0-1.apps.yatai.dev/classify
>>> "left"
```
Currently there are many problems with BentoML 1.0 in the GKE environment, so they have been registered as an issue and this section is written provisionally. -> Issue: bentoml/Yatai#322. If the issue is resolved, BentoML will be applied again.
In this project, I aim to build the motor imagery (MI) task, a core task in Brain Computer Interfaces, into an MLOps pipeline.
(i.e. MI task: brain waves generated while imagining a movement are fed into the model to derive a result.)
BCI Competition IV 2a Dataset (Classification of EEG signals affected by eye movement artifacts)
- Preprocessing
  - Band-pass filter (8–30 Hz)
  - Segment the data into trainable shapes
- Feature Extraction
  - Common Spatial Pattern (CSP)
- Modeling
  - Support Vector Machine (SVM)
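The steps above can be sketched end-to-end on synthetic data. The 22 channels, 750 samples per trial, and 250 Hz sampling rate match the BCI Competition IV 2a format, but the CSP and SVM code below is my own minimal sketch, not the repository's implementation.

```python
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 250                     # BCI IV 2a sampling rate (Hz)
N_CH, N_SAMP = 22, 750       # channels x samples per trial

def bandpass(X, low=8.0, high=30.0, fs=FS):
    """Band-pass filter each trial along the time axis (8-30 Hz)."""
    b, a = butter(4, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, X, axis=-1)

def csp_filters(X0, X1, n_pairs=3, reg=1e-6):
    """Common Spatial Pattern via generalized eigendecomposition of the
    two class-average covariance matrices; extreme eigenvectors give the
    most discriminative spatial filters."""
    def avg_cov(X):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    c0, c1 = avg_cov(X0), avg_cov(X1)
    vals, vecs = eigh(c0, c0 + c1 + reg * np.eye(c0.shape[0]))
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T              # (2*n_pairs, n_channels)

def features(X, W):
    """Log-variance of the CSP-projected signals."""
    Z = np.einsum("fc,nct->nft", W, X)   # project trials through filters
    var = Z.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
X = rng.standard_normal((40, N_CH, N_SAMP))   # synthetic "EEG" trials
y = np.repeat([0, 1], 20)                     # two imagined-movement classes

Xf = bandpass(X)
W = csp_filters(Xf[y == 0], Xf[y == 1])
F = features(Xf, W)
clf = SVC(kernel="rbf").fit(F, y)
print(F.shape, clf.score(F, y))
```

On the real dataset, segmentation would cut continuous recordings into these fixed-shape trials before filtering, and the classifier would be evaluated on held-out sessions rather than training data.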
Github: @jerife · Website: jerife.github.io · Email: jerife@naver.com

Copyright © 2022 jerife. This project is Apache-2.0 licensed.







