DEV Community

Josias Aurel

Posted on • Originally published at josiasdev.best

Create and serve a simple machine learning model

In this tutorial, we are going to create and deploy a simple machine learning model.
We will build the model in Python with the TensorFlow library, then finish by serving it from a Flask application as an API (Application Programming Interface).

Let's get to it.

First, create a directory/folder where you will put all the code from this tutorial.

The model we are going to create is a pretty simple one: given an input x, we want the output 2x + 1. Pretty simple! We want the model to train on some sample data and come up with its own way of finding the right output.
Our sample data is pretty simple too:

```python
sample_input = (1, 2, 3, 4, 5, 6)
sample_output = (3, 5, 7, 9, 11, 13)
```

If you apply the formula to each input, you should get the corresponding output.

Now open the project folder in your favorite editor and let's get to coding.

Creating the model

Create a new file called model.py; we are going to write our model code in there.

First we are going to import tensorflow and keras.

```python
import tensorflow as tf
from tensorflow import keras
```

Next, we create our model instance. Since the problem is a simple one, a sequential model will be fine.

```python
model = keras.Sequential([
    keras.layers.Dense(1, activation="relu", input_shape=[1])
])
```

The next step is to compile our model and give it an optimizer as well as a loss function. The loss function measures how far the model's predictions are from the expected outputs, while the optimizer adjusts the model's weights to reduce that error.

```python
model.compile(optimizer="sgd", loss="mean_squared_error")
```

Next comes training our model.

```python
# train model
model.fit(sample_input, sample_output, epochs=500)
```

You should now end up with this:

```python
import tensorflow as tf
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Dense(1, activation="relu", input_shape=[1])
])

sample_input = (1, 2, 3, 4, 5, 6)
sample_output = (3, 5, 7, 9, 11, 13)

model.compile(optimizer="sgd", loss="mean_squared_error")

# train model
model.fit(sample_input, sample_output, epochs=500)
```

Time to test our model. Add this line at the end of the file. You can pass any value to see if the model works; make sure it's in the form of a list.

```python
print(model.predict([8]))  # my output: [[17.074446]]
```

After training the model, we make it predict on a sample value. I chose 8 as my sample and got ~17. Substituting into our formula, 2(8) + 1 equals 17. This confirms that our model actually works.

Saving the model

After creating the model, it is good to save it so it can be deployed more easily.
Saving the model is as simple as calling the model.save() method.

Add this line at the end of your model.py file.

```python
model.save(filepath="./")
```

The model.save() method takes a filepath keyword argument, which is the path where you want to save the model. I am saving my model in the root of the project.
You should see a file named saved_model.pb (alongside a variables/ folder). This is our model, and it can now be shared or deployed.

Deploying the model

We are now going to create a simple Flask API to serve our model.

Create a new file named app.py and add the following code in there.

```python
from tensorflow.keras import models
from flask import Flask, request

app = Flask(__name__)

# load saved model from the filesystem
model = models.load_model(filepath="./")

@app.route('/predict', methods=["POST"])
def predict():
    value = int(request.data)
    prediction = model.predict([value])
    # print(prediction)
    return str(prediction[0][0])

if __name__ == "__main__":
    app.run(debug=True)
```

First, we import the models utility from Keras and then Flask.
We can load a saved model with the load_model method; it takes a filepath argument specifying where the saved model is located.

Next, we create a route that accepts POST requests.
Inside the function, we read the data sent with the request. It arrives as bytes, so we cast it to an integer.
We then use our loaded model to predict the output for the value sent, and return the prediction. Notice that we cast the prediction to a string; this is because Flask only allows us to return strings, tuples, or dictionaries.

Testing our API

To test the API, we are going to write a simple Python program.
Create a file called test.py and add the following code:

```python
import requests
import sys

value = sys.argv[1]
res = requests.post("http://localhost:5000/predict", value)
print(res.content)
```

Testing it:

```shell
python test.py 9
```

I get b'19.115118'. Yay, it works!

You have reached the end of this tutorial.
I hope you enjoyed building and serving this simple model.

Feel free to play with the API with different values.

Final code here

If you liked this, make sure to share and follow me on Twitter for more.

