Machine-learning-pipeline-deployment-using-Docker-and-FastAPI

This project is focused on the deployment phase of machine learning: Docker and FastAPI are used to deploy a dockerized server for a trained machine learning pipeline.


This project is about deploying a trained machine learning pipeline using FastAPI and Docker. The ML pipeline being deployed is taken from my repository Credit-Risk-Analysis-for-european-peer-to-peer-lending-firm-Bandora.

The ML pipeline includes a RandomForestClassifier that classifies loan borrowers as defaulted / not-defaulted.

Building API using FastAPI framework

According to FastAPI guidelines, the API code must live in a `main.py` file within an `app` directory.

Import the required packages

```python
# Imports for server
import pickle
import pandas as pd
from fastapi import FastAPI
from pydantic import BaseModel

# App name
app = FastAPI(title="Loan Default Classifier for lending firm Bandora")
```

Representing the loan data point

To represent a sample of loan details along with the data type of each attribute, a class needs to be defined using BaseModel from the pydantic library.

```python
# Defining a base class Loan to represent a data point for predictions
class Loan(BaseModel):
    LanguageCode: object
    HomeOwnershipType: object
    Restructured: object
    IncomeTotal: float
    LiabilitiesTotal: float
    LoanDuration: float
    AppliedAmount: float
    Amount: float
    Interest: float
    EMI: float
    PreviousRepaymentsBeforeLoan: float
    MonthlyPaymentDay: float
    PrincipalPaymentsMade: float
    InterestAndPenaltyPaymentsMade: float
    PrincipalBalance: float
    InterestAndPenaltyBalance: float
    Bids: float
    Rating: object
```
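As a quick sanity check (a sketch, not part of the server code), a Loan instance can be built directly from values like those in the loan-examples files; pydantic validates the fields and coerces integers such as 25 to floats where needed:

```python
# A minimal sketch: validating one loan sample with the Loan model.
# Field values are taken from the example request later in this README.
sample = Loan(
    LanguageCode="estonian", HomeOwnershipType="owner", Restructured="no",
    IncomeTotal=1300.0, LiabilitiesTotal=0, LoanDuration=1,
    AppliedAmount=191.7349, Amount=140.6057, Interest=25, EMI=3655.7482,
    PreviousRepaymentsBeforeLoan=258.6256, MonthlyPaymentDay=15,
    PrincipalPaymentsMade=140.6057, InterestAndPenaltyPaymentsMade=2.0227,
    PrincipalBalance=0, InterestAndPenaltyBalance=0, Bids=140.6057, Rating="f",
)
print(sample.dict())  # plain dict of the validated field values
```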

Loading the trained Machine learning pipeline

The trained machine learning pipeline needs to be loaded into memory so that it can be used for predictions.

One way is to load the machine learning pipeline during the startup of our server. To do this, the loading function is decorated with `@app.on_event("startup")`. This decorator ensures that the function loading the ML pipeline is triggered right when the server starts.

The ML pipeline is stored in the `app/ML_artifact` directory.

```python
@app.on_event("startup")
def load_ml_pipeline():
    # Loading the machine learning pipeline from the pickled .sav file
    global RFC_pipeline
    RFC_pipeline = pickle.load(open('app/ML_artifact/RFC_pipeline.sav', 'rb'))
```
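For context, here is a sketch of how such a pipeline could have been pickled in the training repository; the Pipeline contents below are placeholders, not the actual training code:

```python
# Hypothetical sketch of producing RFC_pipeline.sav during training;
# the real pipeline comes from the Credit-Risk-Analysis repository.
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline

RFC_pipeline = Pipeline([("clf", RandomForestClassifier())])  # fitted elsewhere
with open("app/ML_artifact/RFC_pipeline.sav", "wb") as f:
    pickle.dump(RFC_pipeline, f)
```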

Server Endpoint for Prediction

Finally, an endpoint on our server handles prediction requests and returns the value predicted by the deployed ML pipeline.

The endpoint is `/predict` on the server, accessed with a POST operation.

A JSON response containing the prediction is returned.

```python
# Defining the function for handling prediction requests; it is run by the
# /predict endpoint of the server and expects an inference request
# in the form of a Loan instance.
@app.post("/predict")
def predict(inference_request: Loan):
    # Creating a pandas DataFrame to be fed to the RandomForestClassifier
    # pipeline for prediction
    input_dictionary = {
        "LanguageCode": inference_request.LanguageCode,
        "HomeOwnershipType": inference_request.HomeOwnershipType,
        "Restructured": inference_request.Restructured,
        "IncomeTotal": inference_request.IncomeTotal,
        "LiabilitiesTotal": inference_request.LiabilitiesTotal,
        "LoanDuration": inference_request.LoanDuration,
        "AppliedAmount": inference_request.AppliedAmount,
        "Amount": inference_request.Amount,
        "Interest": inference_request.Interest,
        "EMI": inference_request.EMI,
        "PreviousRepaymentsBeforeLoan": inference_request.PreviousRepaymentsBeforeLoan,
        "MonthlyPaymentDay": inference_request.MonthlyPaymentDay,
        "PrincipalPaymentsMade": inference_request.PrincipalPaymentsMade,
        "InterestAndPenaltyPaymentsMade": inference_request.InterestAndPenaltyPaymentsMade,
        "PrincipalBalance": inference_request.PrincipalBalance,
        "InterestAndPenaltyBalance": inference_request.InterestAndPenaltyBalance,
        "Bids": inference_request.Bids,
        "Rating": inference_request.Rating,
    }
    inference_request_Data = pd.DataFrame(input_dictionary, index=[0])
    prediction = RFC_pipeline.predict(inference_request_Data)

    # Returning the prediction (predict returns an array with one label)
    if prediction[0] == 0:
        return {"Prediction": "Not Defaulted"}
    else:
        return {"Prediction": "Defaulted"}
```
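Before dockerizing, the endpoint can be smoke-tested locally with FastAPI's TestClient. This is a sketch (not part of the repository) that assumes the pickled pipeline and the loan-examples files are in place:

```python
# Local smoke test for the /predict endpoint, run from the base directory.
import json
from fastapi.testclient import TestClient
from app.main import app

with open("loan-examples/1.json") as f:
    loan = json.load(f)

# Entering the context manager triggers @app.on_event("startup"),
# so RFC_pipeline is loaded before the request is handled.
with TestClient(app) as client:
    response = client.post("/predict", json=loan)
    print(response.json())  # e.g. {"Prediction": "Defaulted"}
```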

Server

Now that our API has been built, the Uvicorn server can use it to serve prediction requests. Here, however, the server will be dockerized, and final predictions will be served from a Docker container.

Dockerizing the Server

The Docker container will be run on localhost. The project has the following directory structure:

```
.
└── Base dir
    ├── app/
    │   ├── main.py (server code)
    │   └── ML_artifact (dir containing the RFC_pipeline.sav)
    ├── requirements.txt (Python dependencies)
    ├── loan-examples/ (loan examples to test the server)
    ├── README.md (this file)
    └── Dockerfile
```

Creating the Dockerfile

Now, in the base directory, a file named `Dockerfile` is created. The `Dockerfile` contains all the instructions required to build the Docker image.

```dockerfile
FROM frolvlad/alpine-miniconda3:python3.7
COPY requirements.txt .
RUN pip install -r requirements.txt
EXPOSE 80
COPY ./app /app
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```

Base Image

The `FROM` instruction allows a pre-existing image to be used as the base of our new Docker image, instead of building the image from scratch. The software in the pre-existing image then becomes available in our new image.

In this case, `frolvlad/alpine-miniconda3:python3.7` is used as the base image:

  • it contains Python 3.7
  • it also contains Alpine Linux, a distribution created to be very small in size

Other existing images could be used as the base of our new Docker image, but they are much heavier in size, so the one mentioned above is a great fit for this task.

Installing Dependencies

Our Docker image now has an environment with Python installed, so the dependencies required for serving inference requests need to be installed into the image.

The dependencies are listed in the `requirements.txt` file in our base directory. This file is copied into the Docker image with `COPY requirements.txt .`, and the dependencies are then installed by `RUN pip install -r requirements.txt`.
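As a rough sketch, `requirements.txt` would need to list at least the packages imported by the server, the server itself, and scikit-learn for unpickling the pipeline (exact pins are assumptions, not the repository's actual file):

```
fastapi
uvicorn
pydantic
pandas
scikit-learn
```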

Exposing the port

Our server will listen for inference requests on port 80.

```dockerfile
EXPOSE 80
```

Copying our App into Docker image

Our app itself needs to be copied into the Docker image.

```dockerfile
COPY ./app /app
```

Spinning up the server

Docker containers are designed to carry out a single task efficiently. When a Docker container is run, the `CMD` command is executed only once; it is the command that starts our server, specifying the host and port, whenever a container created from our image is started.

```dockerfile
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "80"]
```
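For a quick check outside Docker, the same server can be started directly from the base directory (assuming the dependencies from `requirements.txt` are installed locally):

```
uvicorn app.main:app --host 0.0.0.0 --port 80
```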

Build the Docker Image

The `Dockerfile` is now present in the base directory, so the Docker image can be built using the `docker build` command:

```
docker build -t ml_pipeline:RFC .
```

The `-t` flag specifies the `name:tag` of the Docker image, and the trailing `.` tells Docker to use the current directory as the build context.

Run the Docker Container

Now that the Docker image has been created, a container can be run from it:

```
docker run -p 80:80 ml_pipeline:RFC
```

The `-p 80:80` flag performs port mapping. The container and the local machine each have their own set of ports; since our container is exposed on port 80, it needs to be mapped to a port on the local machine, which here is also 80.
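As an illustration (not a step in this project's workflow), if local port 80 were already in use, the container's port 80 could be mapped to a different local port, and requests would then target that port instead:

```
docker run -p 8080:80 ml_pipeline:RFC
```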

Make Inference Requests to Dockerized Server

Now that our server is listening on port 80, a POST request can be made to predict the class of a loan.

The request should contain the data in JSON format, for example:

{"LanguageCode":"estonian" ,"AppliedAmount":191.7349 ,"Amount":140.6057 ,"Interest" :25 ,"LoanDuration" :1,"EMI":3655.7482,"HomeOwnershipType":"owner","IncomeTotal" :1300.0,"LiabilitiesTotal" :0,"MonthlyPaymentDay":15,"Rating" :"f","Restructured" :"no","PrincipalPaymentsMade" :140.6057,"InterestAndPenaltyPaymentsMade" :2.0227,"PrincipalBalance" :0,"InterestAndPenaltyBalance" :0,"PreviousRepaymentsBeforeLoan" :258.6256,"Bids" :140.6057 }

FastAPI built-in Client

FastAPI ships with interactive API documentation that acts as a built-in client for the deployed server; with the container running, it is available at http://localhost:80/docs.

Using curl to send a request

The `curl` command can be used to send an inference request to the deployed server.

```
curl -X POST http://localhost:80/predict \
    -d @./loan-examples/1.json \
    -H "Content-Type: application/json"
```

Three flags are used with `curl`:

  • `-X`: specifies the type of request, here POST
  • `-d`: the data to be sent with the request
  • `-H`: a header specifying the type of data sent with the request

The `loan-examples` directory has two JSON files containing loan samples for testing the deployed dockerized server.
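The same request can also be sent from Python with the requests library (a sketch that assumes the dockerized server is running locally, as above):

```python
# Equivalent of the curl call, using one of the bundled loan examples.
import json
import requests

with open("loan-examples/1.json") as f:
    loan = json.load(f)

response = requests.post("http://localhost:80/predict", json=loan)
print(response.status_code, response.json())
```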
