# Run AI Workloads at Scale

Colab · ⭐ Star the Repo · Documentation · Join Slack · Twitter · AGPL

Beam is a fast, open-source runtime for serverless AI workloads. It gives you a Pythonic interface to deploy and scale AI applications with zero infrastructure overhead.

Watch the demo

## ✨ Features

- **Fast Image Builds**: Launch containers in under a second using a custom container runtime
- **Parallelization and Concurrency**: Fan out workloads to 100s of containers (see the sketch after this list)
- **First-Class Developer Experience**: Hot-reloading, webhooks, and scheduled jobs
- **Scale-to-Zero**: Workloads are serverless by default
- **Volume Storage**: Mount distributed storage volumes
- **GPU Support**: Run on our cloud (4090s, H100s, and more) or bring your own GPUs
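
As a quick illustration of the fan-out pattern, here is a minimal sketch. It assumes the SDK's `function` decorator and its `.map()` helper; check the docs for the exact signatures:

```python
from beam import function


# Assumption: @function turns a plain Python function into a remote task.
@function(cpu=0.25)
def square(x: int) -> int:
    return x * x


if __name__ == "__main__":
    # .map() fans the calls out across containers and yields each result.
    results = list(square.map(range(100)))
    print(sum(results))
```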

## 📦 Installation

```
pip install beam-client
```

## ⚡️ Quickstart

1. Create an account here
2. Follow our Getting Started Guide

### Creating a sandbox

Spin up isolated containers to run LLM-generated code:

```python
from beam import Image, Sandbox

sandbox = Sandbox(image=Image()).create()
response = sandbox.process.run_code("print('I am running remotely')")
print(response.result)
```
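
If you want to release the container once you're finished, here is a minimal cleanup sketch, assuming `Sandbox` exposes a `terminate()` method:

```python
from beam import Image, Sandbox

sandbox = Sandbox(image=Image()).create()
try:
    response = sandbox.process.run_code("print('I am running remotely')")
    print(response.result)
finally:
    # Assumption: terminate() shuts down the sandbox and frees the container.
    sandbox.terminate()
```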

### Deploy a serverless inference endpoint

Create an autoscaling endpoint for your custom model:

```python
from beam import Image, endpoint
from beam import QueueDepthAutoscaler


@endpoint(
    image=Image(python_version="python3.11"),
    gpu="A10G",
    cpu=2,
    memory="16Gi",
    autoscaler=QueueDepthAutoscaler(max_containers=5, tasks_per_container=30),
)
def handler():
    return {"label": "cat", "confidence": 0.97}
```
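
Once deployed (e.g. with `beam deploy app.py:handler`, following the same CLI pattern shown in the task-queue example below), the endpoint is reachable over HTTP. A minimal sketch using `requests`; the URL and token are placeholders, so copy the real values from your deploy output:

```python
import requests

# Placeholders: substitute the invoke URL and auth token from `beam deploy`.
url = "https://app.beam.cloud/endpoint/my-endpoint/v1"
headers = {"Authorization": "Bearer YOUR_BEAM_TOKEN"}

resp = requests.post(url, headers=headers, json={})
print(resp.json())  # e.g. {"label": "cat", "confidence": 0.97}
```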

### Run background tasks

Schedule resilient background tasks (or replace your Celery queue) by adding a simple decorator:

```python
from beam import Image, TaskPolicy, schema, task_queue


class Input(schema.Schema):
    image_url = schema.String()


@task_queue(
    name="image-processor",
    image=Image(python_version="python3.11"),
    cpu=1,
    memory=1024,
    inputs=Input,
    task_policy=TaskPolicy(max_retries=3),
)
def my_background_task(input: Input, *, context):
    image_url = input.image_url
    print(f"Processing image: {image_url}")
    return {"image_url": image_url}


if __name__ == "__main__":
    # Invoke a background task from your app (without deploying it)
    my_background_task.put(image_url="https://example.com/image.jpg")

    # You can also deploy this behind a versioned endpoint with:
    # beam deploy app.py:my_background_task --name image-processor
```

## Self-Hosting vs Cloud

Beta9 is the open-source engine powering Beam, our fully managed cloud platform. You can self-host Beta9 for free or choose managed cloud hosting through Beam.

## 👋 Contributing

We welcome contributions, big or small.

## ❤️ Thanks to Our Contributors
