Quickstart for GitHub Models
Run your first model with GitHub Models in minutes.
Introduction
GitHub Models is an AI inference API from GitHub that lets you run AI models using just your GitHub credentials. You can choose from many different models—including from OpenAI, Meta, and DeepSeek—and use them in scripts, apps, or even GitHub Actions, with no separate authentication process.
This guide helps you try out models quickly in the playground, then shows you how to run your first model via API or workflow.
Step 1: Try models in the playground
In the playground, select at least one model from the dropdown menu.
Test out different prompts using the Chat view, and compare responses from different models.
Use the Parameters view to customize the parameters for the models you are testing, then see how they impact responses.
Note
The playground works out of the box if you're signed in to GitHub. It uses your GitHub account for access—no setup or API keys required.
Step 2: Make an API call
For full details on available fields, headers, and request formats, see the API reference for GitHub Models.
To call models programmatically, you’ll need:
- A GitHub account.
- A personal access token (PAT) with the `models` scope, which you can create in settings.
Run the following `curl` command, replacing `YOUR_GITHUB_PAT` with your token.

```bash
curl -L \
  -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer YOUR_GITHUB_PAT" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  -H "Content-Type: application/json" \
  https://models.github.ai/inference/chat/completions \
  -d '{"model":"openai/gpt-4.1","messages":[{"role":"user","content":"What is the capital of France?"}]}'
```

You’ll receive a response like this (other fields omitted):

```json
{"choices":[{"message":{"role":"assistant","content":"The capital of France is **Paris**."}}]}
```

To try other models, change the value of the `model` field in the JSON payload to one from the marketplace.
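If you are scripting against the API, you may want just the model's reply rather than the full JSON. The following is a minimal sketch: it assumes `jq` is installed and that your PAT is exported in a `GITHUB_PAT` environment variable (both are assumptions for this example, not requirements of the API).

```bash
# Call the endpoint and print only the assistant's message text.
# Assumes: jq is installed, and GITHUB_PAT holds a PAT with the models scope.
curl -sL \
  -X POST \
  -H "Authorization: Bearer $GITHUB_PAT" \
  -H "Content-Type: application/json" \
  https://models.github.ai/inference/chat/completions \
  -d '{"model":"openai/gpt-4.1","messages":[{"role":"user","content":"What is the capital of France?"}]}' \
  | jq -r '.choices[0].message.content'
```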
Step 3: Run models in GitHub Actions
In your repository, create a workflow file at `.github/workflows/models-demo.yml`.

Paste the following workflow into the file you just created.

```yaml
name: Use GitHub Models

on: [push]

permissions:
  models: read

jobs:
  call-model:
    runs-on: ubuntu-latest
    steps:
      - name: Call AI model
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          curl "https://models.github.ai/inference/chat/completions" \
            -H "Content-Type: application/json" \
            -H "Authorization: Bearer $GITHUB_TOKEN" \
            -d '{
              "messages": [
                {
                  "role": "user",
                  "content": "Explain the concept of recursion."
                }
              ],
              "model": "openai/gpt-4o"
            }'
```

Note

Workflows that call GitHub Models must include `models: read` in the permissions block. GitHub-hosted runners provide a `GITHUB_TOKEN` automatically.

Commit and push to trigger the workflow.
This example shows how to send a prompt to a model and use the response in your continuous integration (CI) workflows. For more advanced use cases, such as summarizing issues, detecting missing reproduction steps for bug reports, or responding to pull requests, see Integrating AI models into your development workflow.
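As a sketch of how you might use the response inside the workflow, the step body below captures the reply and appends it to the job summary. It is a variation on the "Call AI model" step above, not part of the quickstart workflow itself, and it assumes `jq` is available on the runner (it is preinstalled on GitHub-hosted `ubuntu-latest` runners).

```bash
# Alternative body for the "Call AI model" run step: capture the response,
# then write the assistant's reply to the workflow run's job summary.
RESPONSE=$(curl -s "https://models.github.ai/inference/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -d '{
        "messages": [{ "role": "user", "content": "Explain the concept of recursion." }],
        "model": "openai/gpt-4o"
      }')

{
  echo "## Model reply"
  echo "$RESPONSE" | jq -r '.choices[0].message.content'
} >> "$GITHUB_STEP_SUMMARY"
```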
Step 4: Save your first prompt file
GitHub Models supports reusable prompts defined in .prompt.yml files. Once you add this file to your repository, it will appear in the Models page of your repository and can be run directly in the prompt editor and evaluation tooling. Learn more about Storing prompts in GitHub repositories.
In your repository, create a file named `summarize.prompt.yml`. You can save it in any directory.

Paste the following example prompt into the file you just created.

```yaml
name: Text Summarizer
description: Summarizes input text concisely
model: openai/gpt-4o-mini
modelParameters:
  temperature: 0.5
messages:
  - role: system
    content: You are a text summarizer. Your only job is to summarize text given to you.
  - role: user
    content: |
      Summarize the given text, beginning with "Summary -":
      <text>
      {{input}}
      </text>
```

Commit and push the file to your repository.
Go to the Models tab in your repository.
In the navigation menu, click Prompts, then click on the prompt file.
The prompt will open in the prompt editor. Click Run. A right-hand sidebar will appear asking you to enter input text. Enter any input text, then click Run again in the bottom right corner to test it out.
Note
The prompt editor doesn’t automatically pass repository content into prompts. You provide the input manually.
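For context, running this prompt is conceptually similar to the API call from Step 2: the system and user messages come from `summarize.prompt.yml`, with `{{input}}` replaced by the text you typed. The request below is only a hand-written illustration of that mapping (the prompt editor may construct the actual request differently), and the sample input text is made up.

```bash
# Hypothetical equivalent of summarize.prompt.yml as a direct API call,
# with {{input}} replaced by a sample sentence. Assumes GITHUB_PAT is set.
curl -sL \
  -X POST \
  -H "Authorization: Bearer $GITHUB_PAT" \
  -H "Content-Type: application/json" \
  https://models.github.ai/inference/chat/completions \
  -d '{
        "model": "openai/gpt-4o-mini",
        "temperature": 0.5,
        "messages": [
          { "role": "system", "content": "You are a text summarizer. Your only job is to summarize text given to you." },
          { "role": "user", "content": "Summarize the given text, beginning with \"Summary -\":\n<text>\nThe quick brown fox jumped over the lazy dog.\n</text>" }
        ]
      }'
```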
Step 5: Set up your first evaluation
Evaluations help you measure how different models respond to the same inputs so you can choose the best one for your use case.
Go back to the `summarize.prompt.yml` file you created in the previous step.

Update the file to match the following example.

```yaml
name: Text Summarizer
description: Summarizes input text concisely
model: openai/gpt-4o-mini
modelParameters:
  temperature: 0.5
messages:
  - role: system
    content: You are a text summarizer. Your only job is to summarize text given to you.
  - role: user
    content: |
      Summarize the given text, beginning with "Summary -":
      <text>
      {{input}}
      </text>
testData:
  - input: |
      The quick brown fox jumped over the lazy dog. The dog was too tired to react.
    expected: Summary - A fox jumped over a lazy, unresponsive dog.
  - input: |
      The museum opened a new dinosaur exhibit this weekend. Families from all over the city came to see the life-sized fossils and interactive displays.
    expected: Summary - The museum's new dinosaur exhibit attracted many families with its fossils and interactive displays.
evaluators:
  - name: Output should start with 'Summary -'
    string:
      startsWith: 'Summary -'
  - name: Similarity
    uses: github/similarity
```

Commit and push the file to your repository.
In your repository, click the Models tab. Then click Prompts and reopen the same prompt in the prompt editor.
In the top left-hand corner, you can toggle the view from Edit to Compare. Click Compare.
Your evaluation will be set up automatically. ClickRun to see results.
Tip
By clicking Add prompt, you can run the same prompt with different models or change the prompt wording. This lets you get inference responses for multiple variations at once, see their evaluations, and view the results side by side to make data-driven model decisions.