Making developers awesome at machine learning

Your First OpenAI API Project in Python Step-By-Step
Image by Editor | ChatGPT
In a recent post, I demonstrated how to set up and run your first large language model (LLM) locally with Ollama, a user-friendly, multi-platform framework for running open models like Mistral, Llama, Gemma, and more. Within that project, we also implemented a lightweight REST API and web interface for user-model interaction, using FastAPI.
Back by popular demand, we will now put together a similar step-by-step article that navigates you through the process of creating another first-time project, this time using the OpenAI API to access state-of-the-art models like GPT-4 instead of downloading and running local models on your machine.
To get familiar with, or freshen up on, the concepts and tools used in this article, here are some recommended related readings:
And, of course, if you haven’t read the previous article in this series, Your First Local LLM API Project in Python Step-By-Step, you may want to do so now.
Still with me? OK, let’s do it.
In this tutorial, we will assume you have Python 3.9 or later installed on your machine, and that you have a basic to intermediate understanding of the Python programming language. The code can be implemented in a Python-friendly IDE like Visual Studio Code or similar. Please note that, due to the several dependencies required and the recommended use of a virtual environment, this may not be a suitable project to run on a Google Colab instance or a similar cloud notebook environment. Don’t worry, I will guide you through all of the necessary setup steps.
To run your own OpenAI API project, you will need an API key. First, browse to the OpenAI website and sign up, or log in if you have already registered for the OpenAI platform. If you are signing up, you can do so with your email address, Google, or Microsoft account. Then go to settings (gear icon in the top-right corner) and “Billing” in the left-hand side menu. You should see something like this:

As of writing, getting reliable access to modern OpenAI models like GPT-4 requires a billing plan, either subscription or pay-as-you-go. During the process, you’ll be asked for your debit/credit card details and a billing address, and a simple account verification process will follow.
Next, go to the API keys page and create a new API key by clicking + Create new secret key in the top-right corner. The generated key should be of the format sk-XXXXX, where XXXXX is a long alphanumeric string that may also contain special characters like dashes and underscores. You’ll need to copy and paste this API key later.
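Before wiring the key into your project, a quick sanity check on its shape can catch copy-paste mistakes early. The helper below is purely illustrative — OpenAI does not guarantee a fixed key format, so treat this as a rough check, not a validation of the key itself:

```python
import re

def looks_like_openai_key(key: str) -> bool:
    """Rough sanity check: 'sk-' followed by alphanumerics, dashes,
    or underscores. Illustrative only — the exact key format is not
    guaranteed by OpenAI and may change over time."""
    return re.fullmatch(r"sk-[A-Za-z0-9_-]+", key) is not None

print(looks_like_openai_key("sk-abc123_DEF-456"))  # True
print(looks_like_openai_key("not-a-key"))          # False
```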
Assuming you are using VS Code (you may need to take slightly different actions in other IDEs), let’s create a new project folder in the file directory on the left-hand side and call it openai_api_hello or similar: as this is our first OpenAI API project, the “hello” in the project name stands for the ubiquitous “Hello World”!
Inside the folder, we will add two more files: main.py and requirements.txt. We will leave the Python file empty for now, and add this content to the requirements.txt file:
```
fastapi
uvicorn
openai
python-dotenv
```
These are the libraries and dependencies we will need to install in the virtual environment where the project will be run. Make sure you save the changes in the modified file.
Setting up a virtual environment is recommended when working with projects requiring multiple dependencies, because it isolates them, prevents conflicts among library versions, and keeps some sense of order in your “development ecosystem.” In VS Code, you can set one up by opening the Command Palette (Ctrl+Shift+P), choosing Python: Create Environment, selecting Venv and a Python interpreter, and, when prompted, ticking the requirements.txt file created earlier so the listed dependencies are installed. This is very important, as we will need FastAPI, Uvicorn, OpenAI, and Python-dotenv for our Python project to work. If the last step does not work, try running this in the IDE’s terminal:
```
pip install fastapi uvicorn openai python-dotenv
```
Time to populate the empty main.py file we created before, by adding the following code:
```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
import os
from openai import OpenAI
from dotenv import load_dotenv

# IMPORTANT: PASTE YOUR OPENAI API KEY BETWEEN THE "" HERE:
client = OpenAI(api_key="sk-...")

app = FastAPI()

class ChatRequest(BaseModel):
    message: str

@app.post("/chat")
async def chat(request: ChatRequest):
    try:
        completion = client.chat.completions.create(
            model="gpt-4",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": request.message}
            ]
        )
        return {"response": completion.choices[0].message.content}
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

@app.get("/")
def root():
    return {"message": "Hello, World"}
```
Before going further, let’s take a look at the different parts of the code. Trust me, it’s pretty simple:
First, the necessary libraries and modules are imported, along with the os native library. We don’t strictly need os in this version of the code, but if this first attempt works, you might want to come back to your Python code and access your API key from an additional file within your project called .env, instead of pasting the whole API key directly into the Python program. We will elaborate on this shortly.
Very importantly, in this line,
```python
client = OpenAI(api_key="sk-XXXXX")
```
You must paste your previously generated OpenAI API key, specifically replacing the sk-XXXXX string with your actual key, keeping the quotation marks on both sides. Alternatively, if you want to have your API key stored in a separate file, create a file called .env in your project folder, and add:
```
OPENAI_API_KEY=<YOUR KEY GOES HERE>
```
To access the key using this approach, replace the following instruction:
```python
client = OpenAI(api_key="sk-XXXXX")
```
With this one:
```python
load_dotenv(dotenv_path=os.path.join(os.path.dirname(__file__), ".env"))

api_key = os.getenv("OPENAI_API_KEY")
print("Loaded API key:", api_key)  # Useful for debugging and troubleshooting; remove once it works.

if not api_key:
    raise RuntimeError("OPENAI_API_KEY not found in environment variables")

client = OpenAI(api_key=api_key)
```
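If you are curious what load_dotenv is actually doing, here is a minimal, standard-library-only sketch of the idea — reading KEY=value lines from a file into os.environ. The real python-dotenv library handles many more cases (quoting, comments, interpolation), so this is for illustration only:

```python
import os
import tempfile

def load_env_file(path: str) -> None:
    """Minimal sketch of what python-dotenv's load_dotenv does:
    read KEY=value lines from a file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway file standing in for your project's .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("OPENAI_API_KEY=sk-demo-not-a-real-key\n")
    env_path = f.name

os.environ.pop("OPENAI_API_KEY", None)  # clear any pre-existing value so the demo is deterministic
load_env_file(env_path)
print(os.getenv("OPENAI_API_KEY"))
os.unlink(env_path)
```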
If you just decided to keep your API key pasted in the Python file code, you don’t need to take the last couple of steps. Let’s look at more parts of the code:
```python
app = FastAPI()

class ChatRequest(BaseModel):
    message: str
```
The first instruction creates a FastAPI application instance — the web API that will serve requests. The second defines a so-called Pydantic model that FastAPI uses to validate and parse incoming requests smoothly, expecting the request body to be in JSON format. The message: str line adds a field called “message” to the request body, whose string value will later carry the user prompt for the LLM.
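Concretely, this means the endpoint accepts a JSON body with a single string field called "message", and FastAPI rejects anything that doesn’t match. A quick standard-library sketch of the shape being validated (the Pydantic machinery is doing much more under the hood):

```python
import json

# The JSON body our /chat endpoint expects — one "message" string field
body = json.dumps({"message": "What is the capital of France?"})
print(body)

# Conceptually, FastAPI/Pydantic parses the body and checks the field type
parsed = json.loads(body)
assert isinstance(parsed["message"], str)
print(parsed["message"])
```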
The main function of the program is this one:
```python
@app.post("/chat")
async def chat(request: ChatRequest):
    (...)
```
This is where the main logic of the API is defined: the user’s message is sent to the GPT-4 model via the OpenAI client, and the model’s response is collected and returned.
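The response object returned by client.chat.completions.create exposes the generated text at completion.choices[0].message.content. Here is a small mock of that response shape — no real API call is made, the nested object is hand-built just to show the extraction our endpoint performs:

```python
from types import SimpleNamespace

# Hand-built mock of the OpenAI SDK response shape — no real API call here
completion = SimpleNamespace(
    choices=[
        SimpleNamespace(
            message=SimpleNamespace(
                role="assistant",
                content="Paris is the capital of France.",
            )
        )
    ]
)

# The same extraction our /chat endpoint performs before returning JSON
reply = {"response": completion.choices[0].message.content}
print(reply)
```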
Save the Python file. To start the server, run the following command in the IDE’s terminal (very important: make sure you are located in your project’s directory and the virtual environment is activated):
```
uvicorn main:app --reload --port 8010
```
If you see a sequence of ‘INFO’ messages being output like this, that’s a good sign: your FastAPI server is now running:
```
INFO:     Will watch for changes in these directories: ['<PATH_IN_YOUR_MACHINE>/openai_api_hello']
INFO:     Uvicorn running on http://127.0.0.1:8010 (Press CTRL+C to quit)
INFO:     Started reloader process [98131] using StatReload
INFO:     Started server process [98133]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
```
The service is now accessible via http://127.0.0.1:8010/docs (here we used port 8010, but make sure you use the right port for your setup). Open this URL in your browser, and if everything went correctly, you’ll see a FastAPI docs web interface like this:

We have reached the most exciting part of the process: testing our API project! In the interface displayed, click the arrow beside the POST /chat box to unfold it, and click the “Try it out” button.
Here, you can enter a prompt of your choice to ask GPT-4 a question. You must do it in the dedicated JSON request body, replacing the default "string" value of the message field. For instance:

Once you click the “Execute” button, within a few seconds you should get the response by scrolling down slightly:

Well done, your API project is now starting the conversation with OpenAI’s GPT-4 model!
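Besides the Swagger UI, you can call the endpoint programmatically. Below is a minimal client sketch using only the standard library; it assumes the server is running on port 8010 as set up above, and the actual network call is commented out so nothing is sent when you just inspect the request:

```python
import json
import urllib.request

url = "http://127.0.0.1:8010/chat"  # adjust the port if yours differs
payload = json.dumps({"message": "Tell me a fun fact about Python."}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Uncomment once the FastAPI server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```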
After this first project, you may wonder what to do next. Possible ideas for follow-up developments could include adding a frontend UI using technologies like React or Vue, or HTML combined with JavaScript, or even Streamlit. Improving error handling procedures is another logical approach, or trying out other OpenAI models like GPT-3.5 or o3.
This article illustrated, in a step-by-step fashion, how to set up and run your first OpenAI API project for using state-of-the-art models like GPT-4, based on FastAPI for quick model inference through a web-based interface. If you haven’t checked the related tutorial on building your first local LLM project (linked at the top of this article), don’t miss it!






