
CLI extension for the GitHub Models service


Use the GitHub Models service from the CLI!

Using

Prerequisites

The extension requires the gh CLI to be installed and available in the PATH. The extension also requires that the user has authenticated via gh auth.

Installing

After installing the gh CLI, run the following from a command line:

gh extension install https://github.com/github/gh-models

Upgrading

If you've previously installed the gh models extension and want to update to the latest version, you can run this command:

gh extension upgrade github/gh-models

Examples

Listing models

gh models list

Example output:

ID                              DISPLAY NAME
ai21-labs/ai21-jamba-1.5-large  AI21 Jamba 1.5 Large
openai/gpt-4.1                  OpenAI GPT-4.1
openai/gpt-4o-mini              OpenAI GPT-4o mini
cohere/cohere-command-r         Cohere Command R
deepseek/deepseek-v3-0324       Deepseek-V3-0324

Use the value in the "ID" column when specifying the model on the command line.

Running inference

REPL mode

Run the extension in REPL mode. This will prompt you for which model to use.

gh models run

In REPL mode, use /help to list available commands. Otherwise, just type your prompt and hit ENTER to send it to the model.

Single-shot mode

Run the extension in single-shot mode. This will print the model output and exit.

gh models run openai/gpt-4o-mini "why is the sky blue?"

Run the extension with output from a command. This uses single-shot mode.

cat README.md | gh models run openai/gpt-4o-mini "summarize this text"

Evaluating prompts

Run evaluation tests against a model using a .prompt.yml file:

gh models eval my_prompt.prompt.yml
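
A prompt file bundles the model, messages, test data, and evaluators in one place. The sketch below is illustrative rather than a schema reference; field names follow the prompt-file format described in GitHub's documentation on storing prompts in repositories, and the model, inputs, and evaluator values are placeholders:

```yaml
# my_prompt.prompt.yml -- illustrative sketch, not an authoritative schema
name: Text summarizer
description: Summarizes input text in one sentence
model: openai/gpt-4o-mini
messages:
  - role: system
    content: You are a concise summarizer.
  - role: user
    content: "Summarize this text: {{input}}"
# Test cases substituted into the {{input}} template variable
testData:
  - input: "The quick brown fox jumps over the lazy dog."
# Checks applied to each model response
evaluators:
  - name: output mentions the fox
    string:
      contains: fox
```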

The evaluation will run test cases defined in the prompt file and display results in a human-readable format. For programmatic use, you can output results in JSON format:

gh models eval my_prompt.prompt.yml --json

The JSON output includes detailed test results, evaluation scores, and summary statistics that can be processed by other tools or CI/CD pipelines.

Here's a sample GitHub Action that uses the eval command to automatically run the evals in any PR that updates a prompt file: evals_action.yml.
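
For orientation, a workflow along these lines can run the evals on pull requests. This is a hedged sketch, not a copy of the linked sample: the workflow filename, the prompt filename, and the models: read permission are assumptions to verify against your repository and the linked evals_action.yml:

```yaml
# .github/workflows/prompt-evals.yml -- illustrative sketch
name: Prompt evals
on:
  pull_request:
    paths:
      - '**/*.prompt.yml'   # only run when a prompt file changes
jobs:
  eval:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      models: read          # assumed permission for GitHub Models access
    steps:
      - uses: actions/checkout@v4
      - name: Install the gh-models extension
        run: gh extension install github/gh-models
        env:
          GH_TOKEN: ${{ github.token }}
      - name: Run evals
        run: gh models eval my_prompt.prompt.yml   # placeholder path
        env:
          GH_TOKEN: ${{ github.token }}
```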

Learn more about .prompt.yml files here: Storing prompts in GitHub repositories.

Notice

Remember that when interacting with a model you are experimenting with AI, so content mistakes are possible. The feature is subject to various limits (including requests per minute, requests per day, tokens per request, and concurrent requests) and is not designed for production use cases. GitHub Models uses Azure AI Content Safety. These filters cannot be turned off as part of the GitHub Models experience. If you decide to employ models through a paid service, please configure your content filters to meet your requirements. This service is under GitHub's Pre-release Terms. Your use of GitHub Models is subject to the following Product Terms and Privacy Statement. Content within this repository may be subject to additional license terms.
