
Apple Intelligence Logo

Apple On-Device OpenAI API Unified Server

OpenAI-compatible API server powered by Apple’s on-device Foundation Models

Features · Requirements · Build & Install · Usage · License · References


🌟 Overview

A SwiftUI app that runs an OpenAI-compatible API server using Apple’s on-device Foundation Models, unifying the base, deterministic, and creative variants under one endpoint for local use.
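Clients pick a variant through the standard model field of a chat completion request. A minimal sketch, assuming the server defaults shown later; apple-fm-base is taken from the examples below, while the other two IDs are illustrative placeholders for the deterministic and creative variants (GET /v1/models returns the actual names):

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11435/v1", api_key="not-needed")

# "apple-fm-base" appears in the project's own examples; the other two IDs are
# placeholders for illustration. Query GET /v1/models for the real variant names.
for model_id in ("apple-fm-base", "apple-fm-deterministic", "apple-fm-creative"):
    resp = client.chat.completions.create(
        model=model_id,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(model_id, "->", resp.choices[0].message.content)
```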


📸 App Screenshots

Light Mode Dashboard

Main Dashboard - Light Theme

Dark Mode Dashboard

Main Dashboard - Dark Theme

Chat Interface

Chat Interface


🚀 Features

  • 🔁 OpenAI Compatibility: Drop-in replacement for the OpenAI API with a /chat/completions endpoint
  • Streaming Support: Real-time responses via the OpenAI streaming format
  • 💻 On-Device Processing: Uses Apple Foundation Models; no external servers, fully local
  • Availability Check: Automatic Apple Intelligence availability check on startup

🛠️ Requirements

  • macOS: 26 or later
  • Apple Intelligence: must be enabled in Settings → Apple Intelligence & Siri
  • Xcode: 26 or later (must match the macOS version for building)

📦 Building and Installation

Prerequisites

  • macOS 26
  • Xcode 26
  • Apple Intelligence enabled

Build Steps

  1. Clone the repository
  3. Open AppleIntelligenceAPI.xcodeproj in Xcode
  3. Select your development team in project settings
  4. Build and run the project (⌘+R)
  5. The app will launch and start the server

❓ Why a GUI App Instead of CLI?

Apple applies different rate-limiting policies to Foundation Models:

“An app with UI in the foreground has no rate limit. A macOS CLI tool without UI is rate-limited.”
— Apple DTS Engineer (source)

⚠️ Note: You may still encounter limits due to current FoundationModels constraints. If that happens, restart the server.
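If the limits do kick in, a client can at least back off and retry before a manual restart becomes necessary. A minimal sketch with the OpenAI Python client; treating the failure as an openai.APIError is an assumption here, since the project does not document how the limit surfaces:

```python
import time

from openai import OpenAI, APIError

client = OpenAI(base_url="http://127.0.0.1:11435/v1", api_key="not-needed")

def chat_with_retry(prompt: str, attempts: int = 3, delay: float = 5.0) -> str:
    """Retry a chat completion a few times; if it keeps failing, the server
    likely needs the manual restart mentioned in the note above."""
    for attempt in range(attempts):
        try:
            resp = client.chat.completions.create(
                model="apple-fm-base",
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except APIError:  # assumed: rate-limit failures surface as API errors
            if attempt == attempts - 1:
                raise
            time.sleep(delay)  # simple fixed backoff before retrying

print(chat_with_retry("Hello, how are you?"))
```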


📖 Usage

Starting the Server

  1. Launch the app
  2. Configure server settings (default: 127.0.0.1:11435)
  3. Click Start Server
  4. All three models will be served under OpenAI-compatible endpoints

📡 Available Endpoints

  • GET /status → Model availability & status
  • GET /v1/models → List models
  • POST /v1/chat/completions → Chat completions (supports streaming)
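Before sending chat requests, the two GET endpoints above give a quick sanity check that the server is up and which model IDs it serves. A minimal sketch, assuming the default address and that /status returns JSON (its exact fields are not documented here):

```python
import json
import urllib.request

from openai import OpenAI

BASE = "http://127.0.0.1:11435"

# GET /status: raw availability info (printed as-is; fields are not documented here)
with urllib.request.urlopen(f"{BASE}/status") as resp:
    print(json.dumps(json.load(resp), indent=2))

# GET /v1/models: list the served model IDs via the standard OpenAI client
client = OpenAI(base_url=f"{BASE}/v1", api_key="not-needed")
for model in client.models.list():
    print(model.id)
```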

💡 Example Usage

Using curl

```bash
# English
curl -X POST http://127.0.0.1:11435/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "apple-fm-base",
    "messages": [{"role": "user", "content": "Hello, how are you?"}],
    "temperature": 0.7,
    "stream": false
  }'

# French
curl -X POST http://127.0.0.1:11435/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "apple-fm-base",
    "messages": [{"role": "user", "content": "Bonjour, comment allez-vous?"}],
    "stream": false
  }'

# Italian
curl -X POST http://127.0.0.1:11435/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "apple-fm-base",
    "messages": [{"role": "user", "content": "Ciao, come stai?"}],
    "stream": false
  }'
```

Using OpenAI Python Client

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11435/v1", api_key="not-needed")

# --- English (streaming example) ---
print("🔹 English:")
stream = client.chat.completions.create(
    model="apple-fm-base",
    messages=[{"role": "user", "content": "Hello, how are you?"}],
    temperature=0.7,
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
print("\n")

# --- French (non-streaming example) ---
print("🔹 French:")
resp_fr = client.chat.completions.create(
    model="apple-fm-base",
    messages=[{"role": "user", "content": "Bonjour, comment allez-vous?"}],
    stream=False,
)
print(resp_fr.choices[0].message.content)
print()

# --- Italian (non-streaming example) ---
print("🔹 Italian:")
resp_it = client.chat.completions.create(
    model="apple-fm-base",
    messages=[{"role": "user", "content": "Ciao, come stai?"}],
    stream=False,
)
print(resp_it.choices[0].message.content)
```

📜 License

This project is licensed under the MIT License — see LICENSE.


📚 References


🙏 Credits

This project is a fork and modification of gety-ai/apple-on-device-openai.


Built with 🍎 + ❤️ by the open-source community

