Storage library provides a universal interface for accessing and manipulating data in different cloud blob storage providers


ManagedCode.Storage


Cross-provider blob storage toolkit for .NET and ASP.NET streaming scenarios.

Documentation

  • Published docs (GitHub Pages): https://storage.managed-code.com/
  • Source docs live in docs/:
    • Setup: docs/Development/setup.md
    • Credentials (OneDrive/Google Drive/Dropbox/CloudKit): docs/Development/credentials.md
    • Testing strategy: docs/Testing/strategy.md
    • Feature docs: docs/Features/index.md
    • ADRs: docs/ADR/index.md
    • API (HTTP + SignalR): docs/API/storage-server.md
  • Diagrams are Mermaid-based and are expected to render on GitHub and the docs site.


Quickstart

1) Install a provider package

```bash
dotnet add package ManagedCode.Storage.FileSystem
```

2) Register as default IStorage

```csharp
using ManagedCode.Storage.Core;
using ManagedCode.Storage.FileSystem.Extensions;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddFileSystemStorageAsDefault(options =>
{
    options.BaseFolder = Path.Combine(builder.Environment.ContentRootPath, "storage");
});
```

3) Use IStorage

```csharp
using ManagedCode.Storage.Core;

public sealed class MyService(IStorage storage)
{
    public Task UploadAsync(CancellationToken ct) =>
        storage.UploadAsync("hello", options => options.FileName = "hello.txt", ct);
}
```

4) (Optional) Expose HTTP + SignalR endpoints

```csharp
using ManagedCode.Storage.Server.Extensions.DependencyInjection;
using ManagedCode.Storage.Server.Extensions;

builder.Services.AddControllers();
builder.Services.AddStorageServer();
builder.Services.AddStorageSignalR(); // optional

var app = builder.Build();
app.MapControllers(); // /api/storage/*
app.MapStorageHub();  // /hubs/storage
```

ManagedCode.Storage wraps vendor SDKs behind a single IStorage abstraction so uploads, downloads, metadata, streaming, and retention behave the same regardless of provider. Swap between Azure Blob Storage, Azure Data Lake, Amazon S3, Google Cloud Storage, OneDrive, Google Drive, Dropbox, CloudKit (iCloud app data), SFTP, and a local file system without rewriting application code — and optionally use the Virtual File System (VFS) overlay for a file/directory API on top of any configured IStorage. Pair it with our ASP.NET controllers and SignalR client to deliver chunked uploads, ranged downloads, and progress notifications end to end.

Motivation

Cloud storage vendors expose distinct SDKs, option models, and authentication patterns. That makes it painful to change providers, run multi-region replication, or stand up hermetic tests. ManagedCode.Storage gives you a universal surface, consistent Result&lt;T&gt; handling, and DI-aware registration helpers so you can plug in any provider, test locally, and keep the same code paths in production.

Features

  • Unified IStorage abstraction covering upload, download, streaming, metadata, deletion, container management, and legal hold operations backed by Result&lt;T&gt; responses.
  • Provider coverage across Azure Blob Storage, Azure Data Lake, Amazon S3, Google Cloud Storage, OneDrive (Microsoft Graph), Google Drive, Dropbox, CloudKit (iCloud app data), SFTP, and the local file system.
  • Keyed dependency-injection registrations plus default provider helpers to fan out files per tenant, region, or workload without manual service plumbing.
  • ASP.NET storage controllers, chunk orchestration services, and a SignalR hub/client pair that deliver resumable uploads, ranged downloads, CRC32 validation, and real-time progress.
  • ManagedCode.Storage.Client brings streaming uploads/downloads, CRC32 helpers, and MIME discovery via MimeHelper to any .NET app.
  • Strongly typed option objects (UploadOptions, DownloadOptions, DeleteOptions, MetadataOptions, LegalHoldOptions, etc.) let you configure directories, metadata, and legal holds in one place.
  • Virtual File System package provides a file/directory API (IVirtualFileSystem) on top of the configured IStorage and can cache metadata for faster repeated operations.
  • Comprehensive automated test suite with cross-provider sync fixtures, multi-gigabyte streaming simulations (4 MB units per "GB"), ASP.NET controller harnesses, and SFTP/local filesystem coverage.
  • ManagedCode.Storage.TestFakes package plus Testcontainers-based fixtures make it easy to run offline or CI tests without touching real cloud accounts.
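
The Result&lt;T&gt; responses mentioned above report success or failure without throwing. A minimal consumption sketch — the IsSuccess/Value members follow the pattern used in the CRC32 example later in this README; the failure branch is illustrative, not the library's exact error surface:

```csharp
// Sketch only: consuming a Result<T>-style response from IStorage.
var download = await _storage.DownloadAsync("report.pdf", cancellationToken);
if (download.IsSuccess)
{
    await using var localFile = download.Value;
    logger.LogInformation("Downloaded to {Path}", localFile.FilePath);
}
else
{
    logger.LogWarning("Download of report.pdf failed"); // inspect the result for details
}
```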

Packages

Core & Utilities

| Package | Description |
| --- | --- |
| ManagedCode.Storage.Core | Core abstractions, option models, CRC32/MIME helpers, and DI extensions. |
| ManagedCode.Storage.VirtualFileSystem | Virtual file system overlay on top of IStorage (file/directory API + caching; not a provider). |
| ManagedCode.Storage.TestFakes | Provider doubles for unit/integration tests without hitting cloud services. |

Providers

| Package | Description |
| --- | --- |
| ManagedCode.Storage.Azure | Azure Blob Storage implementation with metadata, streaming, and legal hold support. |
| ManagedCode.Storage.Azure.DataLake | Azure Data Lake Gen2 provider on top of the unified abstraction. |
| ManagedCode.Storage.Aws | Amazon S3 provider with Object Lock and legal hold operations. |
| ManagedCode.Storage.Gcp | Google Cloud Storage integration built on official SDKs. |
| ManagedCode.Storage.FileSystem | Local file system implementation for hybrid or on-premises workloads. |
| ManagedCode.Storage.Sftp | SFTP provider powered by SSH.NET for legacy and air-gapped environments. |
| ManagedCode.Storage.OneDrive | OneDrive provider built on Microsoft Graph. |
| ManagedCode.Storage.GoogleDrive | Google Drive provider built on the Google Drive API. |
| ManagedCode.Storage.Dropbox | Dropbox provider built on the Dropbox API. |
| ManagedCode.Storage.CloudKit | CloudKit (iCloud app data) provider built on CloudKit Web Services. |

Configuring OneDrive, Google Drive, Dropbox, and CloudKit

iCloud Drive does not expose a public server-side file API. ManagedCode.Storage.CloudKit targets CloudKit Web Services (iCloud app data), not iCloud Drive.

Credential guide: docs/Development/credentials.md.

These providers follow the same DI patterns as the other backends: use Add*StorageAsDefault(...) to bind IStorage, or Add*Storage(...) to inject the provider interface (IOneDriveStorage, IGoogleDriveStorage, IDropboxStorage, ICloudKitStorage).

Most cloud-drive providers expect you to create the official SDK client (Graph/Drive/Dropbox) with your preferred auth flow and pass it into the storage options. ManagedCode.Storage does not run OAuth flows automatically.

Keyed registrations are available as well (useful for multi-tenant apps):

```csharp
using ManagedCode.Storage.Core;
using ManagedCode.Storage.Dropbox.Extensions;

builder.Services.AddDropboxStorageAsDefault("tenant-a", options =>
{
    options.AccessToken = configuration["Dropbox:AccessToken"]; // obtained via OAuth (see Dropbox section below)
    options.RootPath = "/apps/my-app";
});

var tenantStorage = app.Services.GetRequiredKeyedService<IStorage>("tenant-a");
```

OneDrive / Microsoft Graph

  1. Install the provider package and import DI extensions:

     ```bash
     dotnet add package ManagedCode.Storage.OneDrive
     dotnet add package Azure.Identity
     ```

     ```csharp
     using ManagedCode.Storage.OneDrive.Extensions;
     ```

     Docs: Register an app, Microsoft Graph auth.

  2. Create an app registration in Azure Active Directory (Entra ID) and record the Application (client) ID, Directory (tenant) ID, and a client secret.

  3. In API permissions, add Microsoft Graph permissions:

    • For server-to-server apps: the Application permission Files.ReadWrite.All (or Sites.ReadWrite.All for SharePoint drives), then Grant admin consent.
    • For user flows: Delegated permissions are also possible, but you must supply a Graph client that authenticates as the user.
  4. Create the Graph client (example uses client credentials):

     ```csharp
     using Azure.Identity;
     using Microsoft.Graph;

     var tenantId = configuration["OneDrive:TenantId"]!;
     var clientId = configuration["OneDrive:ClientId"]!;
     var clientSecret = configuration["OneDrive:ClientSecret"]!;

     var credential = new ClientSecretCredential(tenantId, clientId, clientSecret);
     var graphClient = new GraphServiceClient(credential, new[] { "https://graph.microsoft.com/.default" });
     ```
  5. Register OneDrive storage with the Graph client and the drive/root you want to scope to:

     ```csharp
     builder.Services.AddOneDriveStorageAsDefault(options =>
     {
         options.GraphClient = graphClient;
         options.DriveId = "me"; // or a specific drive ID
         options.RootPath = "app-data"; // folder is created when CreateContainerIfNotExists is true
         options.CreateContainerIfNotExists = true;
     });
     ```
  6. If you need a concrete drive id, fetch it via Graph (example):

     ```csharp
     var drive = await graphClient.Me.Drive.GetAsync();
     var driveId = drive?.Id;
     ```

Google Drive

  1. Install the provider package and import DI extensions:

     ```bash
     dotnet add package ManagedCode.Storage.GoogleDrive
     ```

     ```csharp
     using ManagedCode.Storage.GoogleDrive.Extensions;
     ```

     Docs: Drive API overview, OAuth 2.0.

  2. In Google Cloud Console, create a project and enable the Google Drive API.

  3. Create credentials:

    • Service account (recommended for server apps): create a service account and download a JSON key.
    • OAuth client (interactive user auth): configure OAuth consent screen and create an OAuth client id/secret.
  4. Create a DriveService.

    Service account example:

     ```csharp
     using Google.Apis.Auth.OAuth2;
     using Google.Apis.Drive.v3;
     using Google.Apis.Services;

     var credential = GoogleCredential.FromFile("service-account.json")
         .CreateScoped(DriveService.Scope.Drive);

     var driveService = new DriveService(new BaseClientService.Initializer
     {
         HttpClientInitializer = credential,
         ApplicationName = "MyApp"
     });
     ```

    If you use a service account, share the target folder/drive with the service account email (or use a Shared Drive) so it can see your files.

  5. Register the Google Drive provider with the configured DriveService and a root folder id:

     ```csharp
     builder.Services.AddGoogleDriveStorageAsDefault(options =>
     {
         options.DriveService = driveService;
         options.RootFolderId = "root"; // or a specific folder id you control / shared team drive folder id
         options.CreateContainerIfNotExists = true;
         options.SupportsAllDrives = true; // to support shared/team drives
     });
     ```
  6. Store tokens in user secrets or environment variables; never commit them to source control.

Dropbox

  1. Install the provider package and import DI extensions:

     ```bash
     dotnet add package ManagedCode.Storage.Dropbox
     ```

     ```csharp
     using ManagedCode.Storage.Dropbox.Extensions;
     ```

     Docs: Dropbox App Console, OAuth guide.

  2. Create an app in the Dropbox App Console and choose Scoped access with the Full Dropbox or App folder type.

  3. Record the App key and App secret (Settings tab).

  4. Under Permissions, enable files.content.write, files.content.read, files.metadata.read, and files.metadata.write (plus any additional scopes you need) and save changes.

  5. Obtain an access token:

    • For quick local testing, you can generate a token in the app console.
    • For production, use OAuth code flow (example):
    ```csharp
    using Dropbox.Api;

    var appKey = configuration["Dropbox:AppKey"]!;
    var appSecret = configuration["Dropbox:AppSecret"]!;
    var redirectUri = configuration["Dropbox:RedirectUri"]!; // must be registered in the Dropbox app console

    // 1) Redirect the user to:
    // var authorizeUri = DropboxOAuth2Helper.GetAuthorizeUri(OAuthResponseType.Code, appKey, redirectUri, tokenAccessType: TokenAccessType.Offline);
    //
    // 2) Receive the 'code' on your redirect endpoint, then exchange it:
    var auth = await DropboxOAuth2Helper.ProcessCodeFlowAsync(code, appKey, appSecret, redirectUri);
    var accessToken = auth.AccessToken;
    var refreshToken = auth.RefreshToken; // store securely if you requested offline access
    ```
  6. Register Dropbox storage with a root path (use / for full-access apps or /Apps/&lt;your-app&gt; for app folders). You can let the provider create the SDK client from credentials:

     ```csharp
     builder.Services.AddDropboxStorageAsDefault(options =>
     {
         var accessToken = configuration["Dropbox:AccessToken"]!;
         options.AccessToken = accessToken;
         options.RootPath = "/apps/my-app";
         options.CreateContainerIfNotExists = true;
     });
     ```

    Or, for production, prefer refresh tokens (offline access):

     ```csharp
     builder.Services.AddDropboxStorageAsDefault(options =>
     {
         options.RefreshToken = configuration["Dropbox:RefreshToken"]!;
         options.AppKey = configuration["Dropbox:AppKey"]!;
         options.AppSecret = configuration["Dropbox:AppSecret"]; // optional when using PKCE
         options.RootPath = "/apps/my-app";
     });
     ```
  7. Store tokens in user secrets or environment variables; never commit them to source control.

CloudKit (iCloud app data)

  1. Install the provider package and import DI extensions:

     ```bash
     dotnet add package ManagedCode.Storage.CloudKit
     ```

     ```csharp
     using ManagedCode.Storage.CloudKit.Extensions;
     using ManagedCode.Storage.CloudKit.Options;
     ```

     Docs: CloudKit Web Services Reference.

  2. In Apple Developer / CloudKit Dashboard, configure the container you want to use and note its container id (example: iCloud.com.company.app).

    • ContainerId is an identifier (not a secret) and is typically derived from your App ID / bundle id.
  3. Ensure the file record type exists (default MCStorageFile).

  4. Add these fields to the record type:

    • path (String) — must be queryable/indexed for prefix listing.
    • contentType (String) — optional but recommended.
    • file (Asset) — stores the binary content.
  5. Configure authentication:

    • API token (ckAPIToken): create an API token for your container in CloudKit Dashboard and store it as a secret.
    • Server-to-server key (public DB only): create a CloudKit key in Apple Developer (download the .p8 private key and keep the key id).
  6. Register CloudKit storage:

     ```csharp
     builder.Services.AddCloudKitStorageAsDefault(options =>
     {
         options.ContainerId = "iCloud.com.company.app"; // identifier, not a secret
         options.Environment = CloudKitEnvironment.Production;
         options.Database = CloudKitDatabase.Public;
         options.RootPath = "app-data";

         // Choose ONE auth mode:
         options.ApiToken = configuration["CloudKit:ApiToken"];
         // OR:
         // options.ServerToServerKeyId = configuration["CloudKit:KeyId"];
         // options.ServerToServerPrivateKeyPem = configuration["CloudKit:PrivateKeyPem"]; // paste PEM (.p8) contents

         // Optional: provide a custom HttpClient (proxy, retries, test handler).
         // options.HttpClient = new HttpClient();
     });
     ```
  7. CloudKit Web Services impose size limits; keep files reasonably small and validate against your current CloudKit quotas.

ASP.NET & Clients

| Package | Description |
| --- | --- |
| ManagedCode.Storage.Server | ASP.NET controllers, chunk orchestration services, and the SignalR storage hub. |
| ManagedCode.Storage.Client | .NET client SDK for uploads, downloads, metadata, and SignalR negotiations. |
| ManagedCode.Storage.Client.SignalR | SignalR streaming client for browsers and native applications. |

Architecture

Storage Topology

The topology below shows how applications talk to the shared IStorage surface, optional Virtual File System, and keyed provider factories before landing on the concrete backends.

```mermaid
flowchart LR
    subgraph Applications
        API["ASP.NET Controllers"]
        SignalRClient["SignalR Client"]
        Workers["Background Services"]
    end
    subgraph Abstraction
        Core["IStorage Abstractions"]
        VFS["Virtual File System"]
        Factories["Keyed Provider Factories"]
    end
    subgraph Providers
        Azure["Azure Blob"]
        AzureDL["Azure Data Lake"]
        Aws["Amazon S3"]
        Gcp["Google Cloud Storage"]
        OneDrive["OneDrive (Graph)"]
        GoogleDrive["Google Drive"]
        Dropbox["Dropbox"]
        CloudKit["CloudKit (iCloud app data)"]
        Fs["File System"]
        Sftp["SFTP"]
    end
    Applications --> Core
    Core --> VFS
    Core --> Factories
    Factories --> Azure
    Factories --> AzureDL
    Factories --> Aws
    Factories --> Gcp
    Factories --> OneDrive
    Factories --> GoogleDrive
    Factories --> Dropbox
    Factories --> CloudKit
    Factories --> Fs
    Factories --> Sftp
```

Keyed provider registrations let you resolve multiple named instances from dependency injection while reusing the same abstraction across Azure, AWS, Google Cloud Storage, Google Drive, OneDrive, Dropbox, CloudKit, SFTP, and local file system storage.

ASP.NET Streaming Controllers

Controllers in ManagedCode.Storage.Server expose minimal routes that stream directly between HTTP clients and blob providers. Uploads arrive as multipart forms or raw streams, flow through the unified IStorage abstraction, and land in whichever provider is registered. Downloads return FileStreamResult responses so browsers, SDKs, or background jobs can read blobs without buffering the whole payload in memory.

```mermaid
sequenceDiagram
    participant Client as Client App
    participant Controller as StorageController
    participant Storage as IStorage
    participant Provider as IStorage Provider
    Client->>Controller: POST /storage/upload (stream)
    Controller->>Storage: UploadAsync(stream, UploadOptions)
    Storage->>Provider: Push stream to backend
    Provider-->>Storage: Result<BlobMetadata>
    Storage-->>Controller: Upload response
    Controller-->>Client: 200 OK + metadata
    Client->>Controller: GET /storage/download?file=video.mp4
    Controller->>Storage: DownloadAsync(file)
    Storage->>Provider: Open download stream
    Provider-->>Storage: Result<Stream>
    Storage-->>Controller: Stream payload
    Controller-->>Client: Chunked response
```

Controllers remain thin: consumers can inherit and override actions to add custom routing, authorization, or telemetry while leaving the streaming plumbing intact.

Virtual File System (VFS)

Want a file/directory API on top of any configured IStorage (with optional metadata caching)? The ManagedCode.Storage.VirtualFileSystem package provides IVirtualFileSystem, which routes all operations through your registered storage provider.

```csharp
using ManagedCode.Storage.FileSystem.Extensions;
using ManagedCode.Storage.VirtualFileSystem.Core;
using ManagedCode.Storage.VirtualFileSystem.Extensions;

// 1) Register any IStorage provider (example: FileSystem)
builder.Services.AddFileSystemStorageAsDefault(options =>
{
    options.BaseFolder = Path.Combine(builder.Environment.ContentRootPath, "storage");
});

// 2) Add the VFS overlay
builder.Services.AddVirtualFileSystem(options =>
{
    options.DefaultContainer = "vfs";
    options.EnableCache = true;
});

// 3) Use IVirtualFileSystem
public sealed class MyVfsService(IVirtualFileSystem vfs)
{
    public async Task WriteAsync(CancellationToken ct)
    {
        var file = await vfs.GetFileAsync("avatars/user-1.png", ct);
        await file.WriteAllTextAsync("hello", cancellationToken: ct);
    }
}
```

VFS is an overlay: it does not replace your provider. In tests, pair VFS with ManagedCode.Storage.TestFakes or the FileSystem provider pointed at a temp folder to avoid real cloud accounts.

Dependency Injection & Keyed Registrations

Every provider ships with default and provider-specific registrations, but you can also assign multiple named instances using .NET's keyed services. This makes it easy to route traffic to different containers/buckets (e.g. azure-primary vs. azure-dr) or to fan out a file to several backends:

```csharp
using Amazon;
using Amazon.S3;
using ManagedCode.MimeTypes;
using Microsoft.Extensions.DependencyInjection;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

builder.Services
    .AddAzureStorage("azure-primary", options =>
    {
        options.ConnectionString = configuration["Storage:Azure:Primary:ConnectionString"]!;
        options.Container = "assets";
    })
    .AddAzureStorage("azure-dr", options =>
    {
        options.ConnectionString = configuration["Storage:Azure:Dr:ConnectionString"]!;
        options.Container = "assets-dr";
    })
    .AddAWSStorage("aws-backup", options =>
    {
        options.PublicKey = configuration["Storage:Aws:AccessKey"]!;
        options.SecretKey = configuration["Storage:Aws:SecretKey"]!;
        options.Bucket = "assets-backup";
        options.OriginalOptions = new AmazonS3Config { RegionEndpoint = RegionEndpoint.USEast1 };
    });

public sealed class AssetReplicator
{
    private readonly IAzureStorage _primary;
    private readonly IAzureStorage _disasterRecovery;
    private readonly IAWSStorage _backup;

    public AssetReplicator(
        [FromKeyedServices("azure-primary")] IAzureStorage primary,
        [FromKeyedServices("azure-dr")] IAzureStorage secondary,
        [FromKeyedServices("aws-backup")] IAWSStorage backup)
    {
        _primary = primary;
        _disasterRecovery = secondary;
        _backup = backup;
    }

    public async Task MirrorAsync(Stream content, string fileName, CancellationToken cancellationToken = default)
    {
        var uploadOptions = new UploadOptions(fileName, mimeType: MimeHelper.GetMimeType(fileName));

        if (content.CanSeek)
        {
            content.Position = 0;
            await _primary.UploadAsync(content, uploadOptions, cancellationToken);
            content.Position = 0;
            await _disasterRecovery.UploadAsync(content, uploadOptions, cancellationToken);
            content.Position = 0;
            await _backup.UploadAsync(content, uploadOptions, cancellationToken);
            return;
        }

        await using var bufferFile = LocalFile.FromRandomNameWithExtension(fileName);
        await bufferFile.CopyFromStreamAsync(content, cancellationToken);
        await _primary.UploadAsync(bufferFile.FileInfo, uploadOptions, cancellationToken);
        await _disasterRecovery.UploadAsync(bufferFile.FileInfo, uploadOptions, cancellationToken);
        await _backup.UploadAsync(bufferFile.FileInfo, uploadOptions, cancellationToken);
    }
}
```

Keyed services can also be resolved via IServiceProvider.GetRequiredKeyedService&lt;T&gt;("key") when manual dispatching is required.
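
A minimal sketch of that manual dispatch; the router class and tenant key are illustrative, not part of the library:

```csharp
using Microsoft.Extensions.DependencyInjection;

// Illustrative helper: pick a named IStorage registration at runtime.
public sealed class TenantStorageRouter(IServiceProvider services)
{
    public IStorage Resolve(string tenantKey) =>
        services.GetRequiredKeyedService<IStorage>(tenantKey);
}

// Usage: var storage = router.Resolve("azure-primary");
```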

Want to double-check data fidelity after copying? Pair uploads with Crc32Helper:

```csharp
var download = await _backup.DownloadAsync(fileName, cancellationToken);
download.IsSuccess.ShouldBeTrue();

await using var local = download.Value;
var crc = Crc32Helper.CalculateFileCrc(local.FilePath);
logger.LogInformation("Backup CRC for {File} is {Crc}", fileName, crc);
```

The test suite includes end-to-end scenarios that mirror payloads between Azure, AWS, the local file system, and virtual file systems; multi-gigabyte flows execute by default across every provider using 4 MB units per "GB" to keep runs fast while still exercising streaming paths.

ASP.NET Controllers & Streaming

The ManagedCode.Storage.Server package surfaces upload/download controllers that pipe HTTP streams straight into the storage abstraction. Files can be sent as multipart forms or raw streams, while downloads return FileStreamResult so large assets flow back to the caller without buffering in memory.

```csharp
// Program.cs / Startup.cs
builder.Services.AddStorageServer(options =>
{
    options.EnableRangeProcessing = true;                 // support range/seek operations
    options.InMemoryUploadThresholdBytes = 512 * 1024;    // spill to disk after 512 KB
    options.InMemoryDownloadThresholdBytes = 512 * 1024;  // guard APIs that materialize bytes in memory
});

app.MapControllers(); // exposes /api/storage/* endpoints by default
```

When you need custom routes, validation, or policies, inherit from the base controller and reuse the same streaming helpers:

```csharp
[Route("api/files")]
public sealed class FilesController : StorageControllerBase<IMyCustomStorage>
{
    public FilesController(IMyCustomStorage storage, ChunkUploadService chunks, StorageServerOptions options)
        : base(storage, chunks, options)
    {
    }

    // Upload a form file directly into storage
    public Task<IActionResult> Upload(IFormFile file, CancellationToken ct) =>
        UploadFormFileAsync(file, ct);

    // Stream a blob to the client in real time
    public Task<IActionResult> Download(string fileName, CancellationToken ct) =>
        DownloadAsStreamAsync(fileName, ct);
}
```

Need resumable uploads or a live progress UI? Call AddStorageSignalR() to enable the optional hub and connect with the ManagedCode.Storage.Client.SignalR package; otherwise, the controllers alone cover straight HTTP streaming scenarios.

Connection modes

Each provider supports two DI patterns:

  • Default mode: register a provider as the app-wide IStorage (you have one default storage).
  • Provider-specific mode: register the provider interface (IAzureStorage, IAWSStorage, etc.) and/or multiple storages via keyed services.

Cloud-drive providers (OneDrive, Google Drive, Dropbox) and CloudKit are configured in "Configuring OneDrive, Google Drive, Dropbox, and CloudKit"; the same default/provider-specific rules apply.

Azure

Default mode connection:

```csharp
// Startup.cs
services.AddAzureStorageAsDefault(new AzureStorageOptions
{
    Container = "{YOUR_CONTAINER_NAME}",
    ConnectionString = "{YOUR_CONNECTION_STRING}",
});
```

Using in default mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}
```

Provider-specific mode connection:

```csharp
// Startup.cs
services.AddAzureStorage(new AzureStorageOptions
{
    Container = "{YOUR_CONTAINER_NAME}",
    ConnectionString = "{YOUR_CONNECTION_STRING}",
});
```

Using in provider-specific mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IAzureStorage _azureStorage;

    public MyService(IAzureStorage azureStorage)
    {
        _azureStorage = azureStorage;
    }
}
```

Need multiple Azure accounts or containers? Call services.AddAzureStorage("azure-primary", ...) and decorate constructor parameters with [FromKeyedServices("azure-primary")].


Google Cloud

Default mode connection:

```csharp
// Startup.cs
services.AddGCPStorageAsDefault(opt =>
{
    opt.GoogleCredential = GoogleCredential.FromFile("{PATH_TO_YOUR_CREDENTIALS_FILE}.json");
    opt.BucketOptions = new BucketOptions
    {
        ProjectId = "{YOUR_API_PROJECT_ID}",
        Bucket = "{YOUR_BUCKET_NAME}",
    };
});
```

Using in default mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}
```

Provider-specific mode connection:

```csharp
// Startup.cs
services.AddGCPStorage(new GCPStorageOptions
{
    BucketOptions = new BucketOptions
    {
        ProjectId = "{YOUR_API_PROJECT_ID}",
        Bucket = "{YOUR_BUCKET_NAME}",
    }
});
```

Using in provider-specific mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IGCPStorage _gcpStorage;

    public MyService(IGCPStorage gcpStorage)
    {
        _gcpStorage = gcpStorage;
    }
}
```

Need parallel GCS buckets? Register them with AddGCPStorage("gcp-secondary", ...) and inject via [FromKeyedServices("gcp-secondary")].


Amazon

Default mode connection:

```csharp
// Startup.cs
// Tip for LocalStack: configure the client and set ServiceURL to the emulator endpoint.
var awsConfig = new AmazonS3Config
{
    RegionEndpoint = RegionEndpoint.EUWest1,
    ForcePathStyle = true,
    UseHttp = true,
    ServiceURL = "http://localhost:4566" // LocalStack default endpoint
};

services.AddAWSStorageAsDefault(opt =>
{
    opt.PublicKey = "{YOUR_PUBLIC_KEY}";
    opt.SecretKey = "{YOUR_SECRET_KEY}";
    opt.Bucket = "{YOUR_BUCKET_NAME}";
    opt.OriginalOptions = awsConfig;
});
```

Using in default mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}
```

Provider-specific mode connection:

```csharp
// Startup.cs
services.AddAWSStorage(new AWSStorageOptions
{
    PublicKey = "{YOUR_PUBLIC_KEY}",
    SecretKey = "{YOUR_SECRET_KEY}",
    Bucket = "{YOUR_BUCKET_NAME}",
    OriginalOptions = awsConfig
});
```

Using in provider-specific mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IAWSStorage _storage;

    public MyService(IAWSStorage storage)
    {
        _storage = storage;
    }
}
```

Need parallel S3 buckets? Register them with AddAWSStorage("aws-backup", ...) and inject via [FromKeyedServices("aws-backup")].


FileSystem

Default mode connection:

```csharp
// Startup.cs
services.AddFileSystemStorageAsDefault(opt =>
{
    opt.BaseFolder = Path.Combine(Environment.CurrentDirectory, "{YOUR_BUCKET_NAME}");
});
```

Using in default mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}
```

Provider-specific mode connection:

```csharp
// Startup.cs
services.AddFileSystemStorage(new FileSystemStorageOptions
{
    BaseFolder = Path.Combine(Environment.CurrentDirectory, "{YOUR_BUCKET_NAME}"),
});
```

Using in provider-specific mode:

```csharp
// MyService.cs
public class MyService
{
    private readonly IFileSystemStorage _fileSystemStorage;

    public MyService(IFileSystemStorage fileSystemStorage)
    {
        _fileSystemStorage = fileSystemStorage;
    }
}
```

Need to mirror to multiple folders? Use AddFileSystemStorage("archive", options => options.BaseFolder = ...) and resolve them via [FromKeyedServices("archive")].

How to use

The snippets below assume they live in a service class with an injected IStorage:

```csharp
public class MyService
{
    private readonly IStorage _storage;

    public MyService(IStorage storage)
    {
        _storage = storage;
    }
}
```

Upload

```csharp
await _storage.UploadAsync(new MemoryStream()); // any readable Stream
await _storage.UploadAsync("some string content");
await _storage.UploadAsync(new FileInfo("D:\\my_report.txt"));
```

Delete

```csharp
await _storage.DeleteAsync("my_report.txt");
```

Download

```csharp
var localFile = await _storage.DownloadAsync("my_report.txt");
```

Get metadata

```csharp
await _storage.GetBlobMetadataAsync("my_report.txt");
```

Native client

If you need more flexibility, you can use the native client for any IStorage&lt;T&gt;:

```csharp
_storage.StorageClient
```
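
For example, you can pattern-match to a provider interface and drop down to the vendor SDK when IStorage does not cover an operation. This is a sketch; the concrete type of StorageClient depends on which provider is registered:

```csharp
// Sketch: reach the underlying vendor SDK client via StorageClient.
if (_storage is IAzureStorage azureStorage)
{
    var nativeClient = azureStorage.StorageClient;
    // call vendor-specific APIs that IStorage does not expose
}
```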

Conclusion

In summary, the Storage library provides a universal interface for accessing and manipulating data in different cloud blob storage providers, plus ready-to-host ASP.NET controllers, SignalR streaming endpoints, keyed dependency injection, and a virtual file system overlay. It makes it easy to switch between providers, or to use several providers simultaneously, without learning and juggling multiple APIs, while you stay in full control of routing, thresholds, and mirroring. We hope you find it useful in your own projects!


