
S3 Object Storage


Production servers often read, upload, and write files to S3-compatible object storage services instead of the local filesystem. Historically, that has meant the local filesystem APIs you use in development can't be used in production. When you use Bun, things are different.

[Benchmark: Bun's S3 API is fast. Left: Bun v1.1.44. Right: Node.js v23.6.0]

Bun provides fast, native bindings for interacting with S3-compatible object storage services. Bun's S3 API is designed to be simple and to feel similar to fetch's Response and Blob APIs (like Bun's local filesystem APIs).

```ts
import { s3, write, S3Client } from "bun";

// Bun.s3 reads environment variables for credentials
// file() returns a lazy reference to a file on S3
const metadata = s3.file("123.json");

// Download from S3 as JSON
const data = await metadata.json();

// Upload to S3
await write(metadata, JSON.stringify({ name: "John", age: 30 }));

// Presign a URL (synchronous - no network request needed)
const url = metadata.presign({
  acl: "public-read",
  expiresIn: 60 * 60 * 24, // 1 day
});

// Delete the file
await metadata.delete();
```

S3 is the de facto standard internet filesystem. Bun's S3 API works with S3-compatible storage services like:

  • AWS S3
  • Cloudflare R2
  • DigitalOcean Spaces
  • MinIO
  • Backblaze B2
  • ...and any other S3-compatible storage service

Basic Usage

There are several ways to interact with Bun's S3 API.

Bun.S3Client & Bun.s3

Bun.s3 is equivalent to new Bun.S3Client(), relying on environment variables for credentials.

To explicitly set credentials, pass them to the Bun.S3Client constructor.

```ts
import { S3Client } from "bun";

const client = new S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  bucket: "my-bucket",
  // sessionToken: "..."
  // acl: "public-read",
  // endpoint: "https://s3.us-east-1.amazonaws.com",
  // endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2
  // endpoint: "https://<region>.digitaloceanspaces.com", // DigitalOcean Spaces
  // endpoint: "http://localhost:9000", // MinIO
});

// Bun.s3 is a global singleton that is equivalent to `new Bun.S3Client()`
```

Working with S3 Files

The file method on S3Client returns a lazy reference to a file on S3.

```ts
// A lazy reference to a file on S3
const s3file: S3File = client.file("123.json");
```

Like Bun.file(path), the S3Client's file method is synchronous. It does zero network requests until you call a method that depends on a network request.
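For instance, nothing below touches the network until exists() is called (the key name is made up for illustration):

```ts
// No network request happens here:
const ref = client.file("maybe-missing.json");

// The first network request happens only when you call a method
// that needs one:
console.log(await ref.exists()); // false if the object isn't there
```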

Reading files from S3

If you've used the fetch API, you're familiar with the Response and Blob APIs. S3File extends Blob. The same methods that work on Blob also work on S3File.

```ts
// Read an S3File as text
const text = await s3file.text();

// Read an S3File as JSON
const json = await s3file.json();

// Read an S3File as an ArrayBuffer
const buffer = await s3file.arrayBuffer();

// Get only the first 1024 bytes
const partial = await s3file.slice(0, 1024).text();

// Stream the file
const stream = s3file.stream();
for await (const chunk of stream) {
  console.log(chunk);
}
```

Memory optimization

Methods like text(), json(), bytes(), or arrayBuffer() avoid duplicating the string or bytes in memory when possible.

If the text happens to be ASCII, Bun directly transfers the string to JavaScriptCore (the engine) without transcoding and without duplicating the string in memory. When you use .bytes() or .arrayBuffer(), it will also avoid duplicating the bytes in memory.

These helper methods not only simplify the API, they also make it faster.
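As a rough sketch of what that saving looks like, both routes below produce the same string for plain UTF-8 content, but the helper lets Bun skip the intermediate copy when it can (the key name is illustrative):

```ts
import { s3 } from "bun";

const file = s3.file("notes.txt");

// Manual route: buffer all the bytes, then decode them into a second copy
const viaDecoder = new TextDecoder().decode(await file.arrayBuffer());

// Helper route: for ASCII content, Bun can hand the string to the
// engine without transcoding or duplicating it
const viaHelper = await file.text();

console.log(viaDecoder === viaHelper); // true
```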

Writing & uploading files to S3

Writing to S3 is just as simple.

```ts
// Write a string (replacing the file)
await s3file.write("Hello World!");

// Write a Buffer (replacing the file)
await s3file.write(Buffer.from("Hello World!"));

// Write a Response (replacing the file)
await s3file.write(new Response("Hello World!"));

// Write with content type
await s3file.write(JSON.stringify({ name: "John", age: 30 }), {
  type: "application/json",
});

// Write using a writer (streaming)
const writer = s3file.writer({ type: "application/json" });
writer.write("Hello");
writer.write(" World!");
await writer.end();

// Write using Bun.write
await Bun.write(s3file, "Hello World!");
```

Working with large files (streams)

Bun automatically handles multipart uploads for large files and provides streaming capabilities. The same API that works for local files also works for S3 files.

```ts
// Write a large file
const bigFile = Buffer.alloc(10 * 1024 * 1024); // 10MB

const writer = s3file.writer({
  // Automatically retry on network errors up to 3 times
  retry: 3,

  // Queue up to 10 requests at a time
  queueSize: 10,

  // Upload in 5 MB chunks
  partSize: 5 * 1024 * 1024,
});

for (let i = 0; i < 10; i++) {
  await writer.write(bigFile);
}
await writer.end();
```

Presigning URLs

When your production service needs to let users upload files to your server, it's often more reliable for the user to upload directly to S3 instead of your server acting as an intermediary.

To facilitate this, you can presign URLs for S3 files. This generates a URL with a signature that allows a user to securely upload that specific file to S3, without exposing your credentials or granting them unnecessary access to your bucket.

The default behavior is to generate a GET URL that expires in 24 hours. Bun attempts to infer the content type from the file extension. If inference is not possible, it defaults to application/octet-stream.

import { s3 }from"bun";// Generate a presigned URL that expires in 24 hours (default)const download= s3.presign("my-file.txt");// GET, text/plain, expires in 24 hoursconst upload= s3.presign("my-file", {  expiresIn:3600,// 1 hour  method:"PUT",  type:"application/json",// No extension for inferring, so we can specify the content type to be JSON});// You can call .presign() if on a file reference, but avoid doing so// unless you already have a reference (to avoid memory usage).const myFile= s3.file("my-file.txt");const presignedFile= myFile.presign({  expiresIn:3600,// 1 hour});

Setting ACLs

To set an ACL (access control list) on a presigned URL, pass the acl option:

```ts
const url = s3file.presign({
  acl: "public-read",
  expiresIn: 3600,
});
```

You can pass any of the following ACLs:

| ACL | Explanation |
| --- | --- |
| "public-read" | The object is readable by the public. |
| "private" | The object is readable only by the bucket owner. |
| "public-read-write" | The object is readable and writable by the public. |
| "authenticated-read" | The object is readable by the bucket owner and authenticated users. |
| "aws-exec-read" | The object is readable by the AWS account that made the request. |
| "bucket-owner-read" | The object is readable by the bucket owner. |
| "bucket-owner-full-control" | The object is readable and writable by the bucket owner. |
| "log-delivery-write" | The object is writable by AWS services used for log delivery. |

Expiring URLs

To set an expiration time for a presigned URL, pass the expiresIn option.

```ts
const url = s3file.presign({
  // Seconds
  expiresIn: 3600, // 1 hour

  // Access control list
  acl: "public-read",

  // HTTP method
  method: "PUT",
});
```

method

To set the HTTP method for a presigned URL, pass the method option.

```ts
const url = s3file.presign({
  method: "PUT",
  // method: "DELETE",
  // method: "GET",
  // method: "HEAD",
  // method: "POST",
});
```

new Response(S3File)

To quickly redirect users to a presigned URL for an S3 file, pass an S3File instance to a Response object as the body.

```ts
const response = new Response(s3file);
console.log(response);
```

This will automatically redirect the user to the presigned URL for the S3 file, saving you the memory, time, and bandwidth cost of downloading the file to your server and sending it back to the user.

```ts
Response (0 KB) {
  ok: false,
  url: "",
  status: 302,
  statusText: "",
  headers: Headers {
    "location": "https://<account-id>.r2.cloudflarestorage.com/...",
  },
  redirected: true,
  bodyUsed: false,
}
```
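In a server, that makes a download route effectively a one-liner. A sketch, with the route and key extraction made up for illustration:

```ts
import { s3 } from "bun";

Bun.serve({
  port: 3000,
  fetch(req) {
    // e.g. GET /files/123.json redirects to a presigned URL for 123.json
    const key = new URL(req.url).pathname.replace("/files/", "");

    // Responds with a 302; the file bytes never pass through this server.
    return new Response(s3.file(key));
  },
});
```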

Support for S3-Compatible Services

Bun's S3 implementation works with any S3-compatible storage service. Just specify the appropriate endpoint:

Using Bun's S3Client with AWS S3

AWS S3 is the default. You can also pass a region option instead of an endpoint option for AWS S3.

import { S3Client }from"bun";// AWS S3const s3=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",// endpoint: "https://s3.us-east-1.amazonaws.com",// region: "us-east-1",});

Using Bun's S3Client with Google Cloud Storage

To use Bun's S3 client with Google Cloud Storage, set endpoint to "https://storage.googleapis.com" in the S3Client constructor.

import { S3Client }from"bun";// Google Cloud Storageconst gcs=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",  endpoint:"https://storage.googleapis.com",});

Using Bun's S3Client with Cloudflare R2

To use Bun's S3 client with Cloudflare R2, set endpoint to the R2 endpoint in the S3Client constructor. The R2 endpoint includes your account ID.

import { S3Client }from"bun";// CloudFlare R2const r2=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",  endpoint:"https://<account-id>.r2.cloudflarestorage.com",});

Using Bun's S3Client with DigitalOcean Spaces

To use Bun's S3 client with DigitalOcean Spaces, set endpoint to the DigitalOcean Spaces endpoint in the S3Client constructor.

import { S3Client }from"bun";const spaces=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",// region: "nyc3",  endpoint:"https://<region>.digitaloceanspaces.com",});

Using Bun's S3Client with MinIO

To use Bun's S3 client with MinIO, set endpoint to the URL that MinIO is running on in the S3Client constructor.

import { S3Client }from"bun";const minio=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",// Make sure to use the correct endpoint URL// It might not be localhost in production!  endpoint:"http://localhost:9000",});

Using Bun's S3Client with Supabase

To use Bun's S3 client with Supabase, set endpoint to the Supabase endpoint in the S3Client constructor. The Supabase endpoint includes your account ID and the /storage/v1/s3 path. Make sure to enable "Connection via S3 protocol" in the Supabase dashboard at https://supabase.com/dashboard/project/<account-id>/settings/storage, and use the region shown in that same section.

import { S3Client }from"bun";const supabase=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",  region:"us-west-1",  endpoint:"https://<account-id>.supabase.co/storage/v1/s3/storage",});

Using Bun's S3Client with S3 Virtual Hosted-Style endpoints

When using an S3 virtual hosted-style endpoint, set the virtualHostedStyle option to true. If no endpoint is provided, Bun uses the region and bucket to infer the AWS S3 endpoint; if no region is provided, it defaults to us-east-1. If you do provide an endpoint, there is no need to provide the bucket name.

import { S3Client }from"bun";// AWS S3 endpoint inferred from region and bucketconst s3=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  bucket:"my-bucket",  virtualHostedStyle:true,// endpoint: "https://my-bucket.s3.us-east-1.amazonaws.com",// region: "us-east-1",});// AWS S3const s3WithEndpoint=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  endpoint:"https://<bucket-name>.s3.<region>.amazonaws.com",  virtualHostedStyle:true,});// Cloudflare R2const r2WithEndpoint=newS3Client({  accessKeyId:"access-key",  secretAccessKey:"secret-key",  endpoint:"https://<bucket-name>.<account-id>.r2.cloudflarestorage.com",  virtualHostedStyle:true,});

Credentials

Credentials are one of the hardest parts of using S3, and we've tried to make it as easy as possible. By default, Bun reads the following environment variables for credentials.

| Option name | Environment variable |
| --- | --- |
| accessKeyId | S3_ACCESS_KEY_ID |
| secretAccessKey | S3_SECRET_ACCESS_KEY |
| region | S3_REGION |
| endpoint | S3_ENDPOINT |
| bucket | S3_BUCKET |
| sessionToken | S3_SESSION_TOKEN |

If an S3_* environment variable is not set, Bun also checks for the corresponding AWS_* environment variable for each of the above options.

| Option name | Fallback environment variable |
| --- | --- |
| accessKeyId | AWS_ACCESS_KEY_ID |
| secretAccessKey | AWS_SECRET_ACCESS_KEY |
| region | AWS_REGION |
| endpoint | AWS_ENDPOINT |
| bucket | AWS_BUCKET |
| sessionToken | AWS_SESSION_TOKEN |

These environment variables are read from .env files or from the process environment at initialization time (process.env is not used for this).

These defaults are overridden by the options you pass to s3.file(credentials), new Bun.S3Client(credentials), or any of the methods that accept credentials. So if, for example, you use the same credentials for different buckets, you can set the credentials once in your .env file and then pass bucket: "my-bucket" to the s3.file() function without having to specify all the credentials again.
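For example, a sketch of that pattern (bucket and key names are placeholders):

```ts
// .env
// S3_ACCESS_KEY_ID=...
// S3_SECRET_ACCESS_KEY=...
// S3_REGION=us-east-1

import { s3 } from "bun";

// Credentials come from the environment; only the bucket differs per call
const invoice = s3.file("invoices/2025-01.json", { bucket: "billing" });
const avatar = s3.file("avatars/jane.png", { bucket: "assets" });
```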

S3Client objects

When you're not using environment variables, or when you're using multiple buckets, you can create an S3Client object to explicitly set credentials.

import { S3Client }from"bun";const client=newS3Client({  accessKeyId:"your-access-key",  secretAccessKey:"your-secret-key",  bucket:"my-bucket",// sessionToken: "..."  endpoint:"https://s3.us-east-1.amazonaws.com",// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2// endpoint: "http://localhost:9000", // MinIO});// Write using a Responseawait file.write(newResponse("Hello World!"));// Presign a URLconst url= file.presign({  expiresIn:60*60*24,// 1 day  acl:"public-read",});// Delete the fileawait file.delete();

S3Client.prototype.write

To upload or write a file to S3, call write on the S3Client instance.

```ts
const client = new Bun.S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  endpoint: "https://s3.us-east-1.amazonaws.com",
  bucket: "my-bucket",
});

await client.write("my-file.txt", "Hello World!");
await client.write("my-file.txt", new Response("Hello World!"));

// equivalent to
// await client.file("my-file.txt").write("Hello World!");
```

S3Client.prototype.delete

To delete a file from S3, call delete on the S3Client instance.

```ts
const client = new Bun.S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  bucket: "my-bucket",
});

await client.delete("my-file.txt");

// equivalent to
// await client.file("my-file.txt").delete();
```

S3Client.prototype.exists

To check if a file exists in S3, call exists on the S3Client instance.

```ts
const client = new Bun.S3Client({
  accessKeyId: "your-access-key",
  secretAccessKey: "your-secret-key",
  bucket: "my-bucket",
});

const exists = await client.exists("my-file.txt");

// equivalent to
// const exists = await client.file("my-file.txt").exists();
```

S3File

S3File instances are created by calling the file method on an S3Client instance or the s3.file() function. Like Bun.file(), S3File instances are lazy: they don't necessarily refer to something that exists at the time of creation. That's why all the methods that don't involve network requests are fully synchronous.

```ts
interface S3File extends Blob {
  slice(start: number, end?: number): S3File;
  exists(): Promise<boolean>;
  unlink(): Promise<void>;
  presign(options: S3Options): string;
  text(): Promise<string>;
  json(): Promise<any>;
  bytes(): Promise<Uint8Array>;
  arrayBuffer(): Promise<ArrayBuffer>;
  stream(options: S3Options): ReadableStream;
  write(
    data:
      | string
      | Uint8Array
      | ArrayBuffer
      | Blob
      | ReadableStream
      | Response
      | Request,
    options?: BlobPropertyBag,
  ): Promise<number>;

  exists(options?: S3Options): Promise<boolean>;
  unlink(options?: S3Options): Promise<void>;
  delete(options?: S3Options): Promise<void>;
  presign(options?: S3Options): string;
  stat(options?: S3Options): Promise<S3Stat>;

  /**
   * Size is not synchronously available because it requires a network request.
   *
   * @deprecated Use `stat()` instead.
   */
  size: NaN;

  // ... more omitted for brevity
}
```

Like Bun.file(), S3File extends Blob, so all the methods that are available on Blob are also available on S3File. The same API for reading data from a local file is also available for reading data from S3.

| Method | Output |
| --- | --- |
| await s3File.text() | string |
| await s3File.bytes() | Uint8Array |
| await s3File.json() | JSON |
| await s3File.stream() | ReadableStream |
| await s3File.arrayBuffer() | ArrayBuffer |

That means using S3File instances with fetch(), Response, and other web APIs that accept Blob instances just works.

Partial reads with slice

To read a partial range of a file, you can use the slice method.

```ts
const partial = s3file.slice(0, 1024);

// Read the partial range as a Uint8Array
const bytes = await partial.bytes();

// Read the partial range as a string
const text = await partial.text();
```

Internally, this works by using the HTTP Range header to request only the bytes you want. This slice method is the same as Blob.prototype.slice.

Deleting files from S3

To delete a file from S3, you can use the delete method.

```ts
await s3file.delete();
// await s3file.unlink();
```

delete is the same as unlink.

Error codes

When Bun's S3 API throws an error, it will have a code property that matches one of the following values:

  • ERR_S3_MISSING_CREDENTIALS
  • ERR_S3_INVALID_METHOD
  • ERR_S3_INVALID_PATH
  • ERR_S3_INVALID_ENDPOINT
  • ERR_S3_INVALID_SIGNATURE
  • ERR_S3_INVALID_SESSION_TOKEN

When the S3 Object Storage service returns an error (that is, not Bun), it will be an S3Error instance (an Error instance with the name "S3Error").
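A sketch of telling the two apart (the key name is illustrative, and the exact error shapes may vary):

```ts
import { s3 } from "bun";

try {
  await s3.file("does-not-exist.json").text();
} catch (error: any) {
  if (error?.name === "S3Error") {
    // The storage service rejected the request (e.g. a missing key)
    console.error("S3 service error:", error.message);
  } else {
    // Bun itself threw, e.g. code === "ERR_S3_MISSING_CREDENTIALS"
    console.error("Bun S3 error:", error?.code);
  }
}
```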

S3Client static methods

The S3Client class provides several static methods for interacting with S3.

S3Client.presign (static)

To generate a presigned URL for an S3 file, you can use the S3Client.presign static method.

import { S3Client }from"bun";const credentials= {  accessKeyId:"your-access-key",  secretAccessKey:"your-secret-key",  bucket:"my-bucket",// endpoint: "https://s3.us-east-1.amazonaws.com",// endpoint: "https://<account-id>.r2.cloudflarestorage.com", // Cloudflare R2};const url= S3Client.presign("my-file.txt", {...credentials,  expiresIn:3600,});

This is equivalent to calling new S3Client(credentials).presign("my-file.txt", { expiresIn: 3600 }).

S3Client.exists (static)

To check if an S3 file exists, you can use the S3Client.exists static method.

import { S3Client }from"bun";const credentials= {  accessKeyId:"your-access-key",  secretAccessKey:"your-secret-key",  bucket:"my-bucket",// endpoint: "https://s3.us-east-1.amazonaws.com",};const exists=await S3Client.exists("my-file.txt", credentials);

The same method also works on S3File instances.

import { s3 }from"bun";const s3file= s3.file("my-file.txt", {...credentials,});const exists=await s3file.exists();

S3Client.stat (static)

To get the size, etag, and other metadata of an S3 file, you can use the S3Client.stat static method.

import { S3Client }from"bun";const credentials= {  accessKeyId:"your-access-key",  secretAccessKey:"your-secret-key",  bucket:"my-bucket",// endpoint: "https://s3.us-east-1.amazonaws.com",};const stat=await S3Client.stat("my-file.txt", credentials);// {//   etag: "\"7a30b741503c0b461cc14157e2df4ad8\"",//   lastModified: 2025-01-07T00:19:10.000Z,//   size: 1024,//   type: "text/plain;charset=utf-8",// }

S3Client.delete (static)

To delete an S3 file, you can use the S3Client.delete static method.

import { S3Client }from"bun";const credentials= {  accessKeyId:"your-access-key",  secretAccessKey:"your-secret-key",  bucket:"my-bucket",// endpoint: "https://s3.us-east-1.amazonaws.com",};await S3Client.delete("my-file.txt", credentials);// equivalent to// await new S3Client(credentials).delete("my-file.txt");// S3Client.unlink is alias of S3Client.deleteawait S3Client.unlink("my-file.txt", credentials);

s3:// protocol

To make it easier to use the same code for local files and S3 files, the s3:// protocol is supported in fetch and Bun.file().

```ts
const response = await fetch("s3://my-bucket/my-file.txt");
const file = Bun.file("s3://my-bucket/my-file.txt");
```

You can additionally pass s3 options to the fetch and Bun.file functions.

```ts
const response = await fetch("s3://my-bucket/my-file.txt", {
  s3: {
    accessKeyId: "your-access-key",
    secretAccessKey: "your-secret-key",
    endpoint: "https://s3.us-east-1.amazonaws.com",
  },
  headers: {
    "range": "bytes=0-1023",
  },
});
```

UTF-8, UTF-16, and BOM (byte order mark)

Like Response and Blob, S3File assumes UTF-8 encoding by default.

When calling one of the text() or json() methods on an S3File:

  • When a UTF-16 byte order mark (BOM) is detected, the data is treated as UTF-16. JavaScriptCore natively supports UTF-16, so Bun skips the UTF-8 transcoding process (and strips the BOM). This is mostly good, but it does mean that invalid surrogate pairs in your UTF-16 string are passed through to JavaScriptCore (the same as source code).
  • When a UTF-8 BOM is detected, it is stripped before the string is passed to JavaScriptCore, and invalid UTF-8 codepoints are replaced with the Unicode replacement character (\uFFFD); see the sketch after this list.
  • UTF-32 is not supported.
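A small sketch of the UTF-8 BOM behavior (the key name is illustrative):

```ts
import { s3 } from "bun";

const file = s3.file("bom-example.txt");

// "\uFEFF" is a byte order mark; written as UTF-8 it becomes
// the bytes EF BB BF at the start of the object.
await file.write("\uFEFFHello");

// text() detects and strips the UTF-8 BOM before handing the
// string to JavaScriptCore.
console.log(await file.text()); // "Hello"
```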
