testdrivenio/fastapi-ml

Deploying an ML model to Heroku with FastAPI


Want to learn how to build this?

Check out the tutorial.

Want to use this project?

With Docker

  1. Build and tag the Docker image:

    $ docker build -t fastapi-prophet .
  2. Spin up the container:

    $ docker run --name fastapi-ml -e PORT=8008 -p 8008:8008 -d fastapi-prophet:latest
  3. Train the model:

    $ docker exec -it fastapi-ml python
    >>> from model import train, predict, convert
    >>> train()
  4. Test:

    $ curl \
      --header "Content-Type: application/json" \
      --request POST \
      --data '{"ticker":"MSFT"}' \
      http://localhost:8008/predict
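The same test request can be issued from Python instead of curl; a minimal sketch using only the standard library (the URL and payload mirror step 4; the shape of the JSON response is whatever the app returns):

```python
import json
from urllib import request

def build_payload(ticker: str) -> bytes:
    """Encode the request body exactly as the curl command does."""
    return json.dumps({"ticker": ticker}).encode("utf-8")

def predict_via_api(ticker: str, base_url: str = "http://localhost:8008") -> dict:
    """POST the ticker to the /predict endpoint and decode the JSON response."""
    req = request.Request(
        f"{base_url}/predict",
        data=build_payload(ticker),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# With the container from the steps above running:
# predict_via_api("MSFT")
```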

Without Docker

  1. Create and activate a virtual environment:

    $ python3 -m venv venv && source venv/bin/activate
  2. Install the requirements:

    (venv)$ pip install -r requirements.txt
  3. Train the model:

    (venv)$ python
    >>> from model import train, predict, convert
    >>> train()
  4. Run the app:

    (venv)$ uvicorn main:app --reload --workers 1 --host 0.0.0.0 --port 8008
  5. Test:

    $ curl \
      --header "Content-Type: application/json" \
      --request POST \
      --data '{"ticker":"MSFT"}' \
      http://localhost:8008/predict
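Step 3 assumes a `model.py` exposing `train`, `predict`, and `convert`. The real module (the image tag suggests it fits a Prophet forecaster) is not shown here; the sketch below uses a hypothetical stand-in model and file name purely to illustrate the train-then-persist, load-then-forecast, format-for-JSON split:

```python
import pickle
from datetime import date, timedelta
from pathlib import Path

MODEL_FILE = Path("model_msft.pkl")  # hypothetical filename scheme

def train(ticker: str = "MSFT") -> None:
    """Fit a model on historical prices and persist it to disk.
    Stand-in: a (base_price, daily_drift) pair instead of a Prophet model."""
    model = {"base": 420.0, "drift": 0.15}  # placeholder parameters
    MODEL_FILE.write_bytes(pickle.dumps(model))

def predict(ticker: str = "MSFT", days: int = 7):
    """Load the persisted model and forecast the next `days` days."""
    model = pickle.loads(MODEL_FILE.read_bytes())
    today = date.today()
    return [
        (today + timedelta(days=i), model["base"] + model["drift"] * i)
        for i in range(1, days + 1)
    ]

def convert(prediction_list):
    """Flatten a forecast into a {date-string: price} dict for the JSON response."""
    return {d.isoformat(): round(p, 2) for d, p in prediction_list}
```

The split matters operationally: `train` is run once (in the REPL above), while `predict` and `convert` are cheap enough to call per request from the FastAPI handler.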

