# tweedegolf/storage-abstraction
Provides an abstraction layer for interacting with a storage; the storage can be a local file system or a cloud storage service. Supported cloud storage services are:
- MinIO
- Azure Blob
- Backblaze B2
- Google Cloud
- Amazon S3
S3-compliant cloud services are also supported. Tested S3-compatible services are:
- Backblaze S3
- CloudFlare R2
- Cubbit
The API only provides basic storage operations (see below) and is therefore cloud agnostic. This means that you can develop your application using local disk storage and then use, for instance, Google Cloud or Amazon S3 in your production environment without having to change any code.
- How it works
- Instantiate a storage
- Adapters
- Adapter Introspect API
- Adapter API
- Storage API
- Adding an adapter
- Tests
- Example application
- Questions and requests
A `Storage` instance is a thin wrapper around one of the available adapters. These adapters are available as separate packages on npm. This way your code base stays as slim as possible because you only have to add the adapter(s) that you need to your project.
Most adapters are wrappers around the service-specific clients of the cloud storage services, e.g. the AWS SDK.
List of available adapters:

- Local file system: `npm i @tweedegolf/sab-adapter-local`
- Amazon S3 (and compatible): `npm i @tweedegolf/sab-adapter-amazon-s3`
- Google Cloud: `npm i @tweedegolf/sab-adapter-google-cloud`
- Backblaze B2: `npm i @tweedegolf/sab-adapter-backblaze-b2`
- Azure Blob: `npm i @tweedegolf/sab-adapter-azure-blob`
- MinIO: `npm i @tweedegolf/sab-adapter-minio`
When you create a `Storage` instance, it creates an instance of an adapter based on the configuration object or URL that you provide. All API calls to the `Storage` are then forwarded to this adapter instance. Below is a code snippet of the `Storage` class that shows how `createBucket` is forwarded:

```typescript
// member function of class Storage
public async createBucket(name: string): Promise<ResultObject> {
  return this.adapter.createBucket(name);
}
```
The class `Storage` implements the interface `IAdapter`, and this interface declares the complete API. Because all adapters also have to implement this interface, either by extending `AbstractAdapter` or otherwise, all API calls on `Storage` can be forwarded directly to the adapters.
The adapter subsequently creates an instance of the cloud-storage-specific service client, and this instance handles the actual communication with the cloud service. For instance:

```typescript
// Amazon S3 adapter
private _client = new S3Client();

// Azure Blob Storage adapter
private _client = new BlobServiceClient();
```
Therefore, depending on which definitions you use, this library could be seen as a wrapper or a shim.
```typescript
const s = new Storage(config);
```
When you create a new `Storage` instance, the `config` argument is used to instantiate the right adapter. You can provide the `config` argument in 2 forms:

- using a configuration object (js: `typeof === "object"`, ts: `AdapterConfig`)
- using a configuration URL (`typeof === "string"`)
Internally the configuration URL will be converted to a configuration object so any rule that applies to a configuration object also applies to configuration URLs.
The configuration must at least specify a type; the type is used to determine which adapter should be created. Note that the adapters are not included in the Storage Abstraction package, so you have to add them to your project's package.json before you can use them.
The value of the type is one of the enum members of `StorageType`:

```typescript
enum StorageType {
  LOCAL = "local",
  GCS = "gcs",
  S3 = "s3",
  B2 = "b2",
  AZURE = "azure",
  MINIO = "minio",
}
```
The Storage instance is only interested in the type: it checks whether the type is valid and then passes the rest of the configuration on to the adapter constructor. It is the responsibility of the adapter to perform further checks on the configuration, e.g. whether all mandatory values, such as credentials or an endpoint, are present.
To enforce that the configuration object contains a `type` key, the configuration object is expected to be of type `StorageAdapterConfig`:

```typescript
interface AdapterConfig {
  bucketName?: string;
  [id: string]: any; // any service specific mandatory or optional key
}

interface StorageAdapterConfig extends AdapterConfig {
  type: string;
}
```
Besides the mandatory key `type`, one or more keys may be mandatory or optional depending on the type of adapter; for instance keys for passing credentials, such as `keyFilename` for Google Cloud or `accessKeyId` and `secretAccessKey` for Amazon S3, and keys for further configuring the storage service, such as `StoragePipelineOptions` for Azure Blob.
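For illustration, a configuration object for the Amazon S3 adapter might look like the sketch below; the keys besides `type` are adapter specific, so check the adapter's README for the exact set of supported keys:

```typescript
// Hypothetical S3 configuration object; `accessKeyId` and `secretAccessKey`
// are the credential keys mentioned above, `region` is an optional extra.
const config = {
  type: "s3",
  accessKeyId: "your-key-id",
  secretAccessKey: "your-secret",
  region: "eu-west-1",
  bucketName: "the-buck",
};
```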
The general format of configuration URLs is:

```typescript
const u = "protocol://username:password@host:port/path/to/object?region=auto&option2=value2...";
```
For most storage services, `username` and `password` are the credentials, such as a key id and secret, but this is not mandatory; you may use these values for other purposes.
The protocol part of the URL defines the type of storage:

- `local://` → local storage
- `minio://` → MinIO
- `b2://` → Backblaze B2
- `s3://` → Amazon S3
- `gcs://` → Google Cloud
- `azure://` → Azure Blob Storage
These values match the values of the enum `StorageType` shown above.
The URL parser generates a generic object with generic keys that resembles the standard JavaScript `URL` object. This object will be converted to the adapter-specific `AdapterConfig` format in the constructor of the adapter. During conversion, the `searchParams` object is flattened into the config object, for example:
```typescript
// url with credentials, bucket and query string
const u = "s3://key:secret@the-buck/path/to/object?region=auto&option2=value2";

// output parser
const p = {
  protocol: "s3",
  username: "key",
  password: "secret",
  host: "the-buck",
  port: null,
  path: "path/to/object",
  searchParams: {
    region: "auto",
    option2: "value2",
  },
};

// AdapterConfigAmazonS3
const c = {
  type: "s3",
  accessKeyId: "key",
  secretAccessKey: "secret",
  bucketName: "the-buck",
  region: "auto",
  option2: "value2",
};
```
The components of the URL represent config parameters, and because not all adapters require the same (number of) parameters, not all components of the URL are mandatory. When you leave certain components out, the result may be an invalid URL according to the official specification, but the parser will parse it anyway.
```typescript
// port and bucket
const u = "s3://part1:part2@bucket:9000/path/to/object?region=auto&option2=value2";
const p = {
  protocol: "s3",
  username: "part1",
  password: "part2",
  host: "bucket",
  port: "9000",
  path: "path/to/object",
  searchParams: { region: "auto", option2: "value2" },
};

// no bucket but with @
const u = "s3://part1:part2@:9000/path/to/object?region=auto&option2=value2";
const p = {
  protocol: "s3",
  username: "part1",
  password: "part2",
  host: null,
  port: "9000",
  path: "path/to/object",
  searchParams: { region: "auto", option2: "value2" },
};

// no bucket
const u = "s3://part1:part2:9000/path/to/object?region=auto&option2=value2";
const p = {
  protocol: "s3",
  username: "part1",
  password: "part2",
  host: null,
  port: "9000",
  path: "path/to/object",
  searchParams: { region: "auto", option2: "value2" },
};

// no credentials, note: @ is mandatory in order to be able to parse the bucket name
const u = "s3://@bucket/path/to/object?region=auto&option2=value2";
const p = {
  protocol: "s3",
  username: null,
  password: null,
  host: "bucket",
  port: null,
  path: "path/to/object",
  searchParams: { region: "auto", option2: "value2" },
};

// no credentials, no bucket
const u = "s3:///path/to/object?region=auto&option2=value2";
const p = {
  protocol: "s3",
  username: "/path/to/object",
  password: null,
  host: null,
  port: null,
  path: null,
  searchParams: { region: "auto", option2: "value2" },
};

// no credentials, no bucket, no extra options (query string)
const u = "s3:///path/to/object";
const p = {
  protocol: "s3",
  username: "/path/to/object",
  password: null,
  host: null,
  port: null,
  path: null,
  searchParams: null,
};

// only protocol
const u = "s3://";
const p = {
  protocol: "s3",
  username: null,
  password: null,
  host: null,
  port: null,
  path: null,
  searchParams: null,
};

// absolutely bare
const u = "s3";
const p = {
  protocol: "s3",
  username: null,
  password: null,
  host: null,
  port: null,
  path: null,
  searchParams: null,
};
```
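The flattening of `searchParams` described above can be sketched as a small conversion function; the function name and the exact mapping below are illustrative, the real conversion lives in each adapter's constructor:

```typescript
// Shape of the object produced by the url parser (see the examples above).
type ParsedUrl = {
  protocol: string;
  username: string | null;
  password: string | null;
  host: string | null;
  port: string | null;
  path: string | null;
  searchParams: Record<string, string> | null;
};

// Sketch: convert a parsed url into an S3-style adapter config,
// flattening the searchParams into the config object.
function toS3Config(p: ParsedUrl): { [id: string]: any } {
  return {
    type: p.protocol,
    accessKeyId: p.username,
    secretAccessKey: p.password,
    bucketName: p.host,
    ...(p.searchParams ?? {}), // flatten query params into the config
  };
}

const c = toS3Config({
  protocol: "s3",
  username: "key",
  password: "secret",
  host: "the-buck",
  port: null,
  path: "path/to/object",
  searchParams: { region: "auto", option2: "value2" },
});
// c.region === "auto", c.option2 === "value2"
```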
If you provide a bucket name, it will be stored in the state of the Storage instance. This makes it possible, for instance, to add a file to a bucket without specifying the name of the bucket:

```typescript
storage.addFile("path/to/your/file");
// the file was automatically added to the selected bucket
```
Note that if the bucket does not exist, it will not be created automatically for you when you create a Storage instance! This was the case in earlier versions, but as of version 2.0.0 you have to create the bucket yourself using `createBucket`.
The adapters are the key part of this library; whereas the `Storage` class is merely a thin wrapper, the adapters perform the actual actions on the cloud storage by translating generic API method calls to storage-specific calls. The adapters are not part of the Storage Abstraction package; you need to install them separately. See How it works.
A description of the available adapters, including what the configuration objects and URLs look like and what the default values are, can be found in the README of the adapter packages:
| type | npm command | readme |
|---|---|---|
| Local storage | `npm i @tweedegolf/sab-adapter-local` | npm.com↗ |
| Amazon S3 | `npm i @tweedegolf/sab-adapter-amazon-s3` | npm.com↗ |
| Google Cloud | `npm i @tweedegolf/sab-adapter-google-cloud` | npm.com↗ |
| Azure Blob | `npm i @tweedegolf/sab-adapter-azure-blob` | npm.com↗ |
| MinIO | `npm i @tweedegolf/sab-adapter-minio` | npm.com↗ |
| Backblaze B2 | `npm i @tweedegolf/sab-adapter-backblaze-b2` | npm.com↗ |
You can also very easily add more adapters yourself, see below.
These methods can be used to introspect the adapter. Unlike all other methods, these methods do not return a promise but return a value immediately.
`getType(): string;`

Returns the type of storage; the value is one of the enum `StorageType`.

Also implemented as a getter:

```typescript
const storage = new Storage(config);
console.log(storage.type);
```
`getSelectedBucket(): null | string`

Returns the name of the bucket that you've provided with the config upon instantiation or that you've set afterwards using `setSelectedBucket`.

Also implemented as a getter:

```typescript
const storage = new Storage(config);
console.log(storage.bucketName);
```
`setSelectedBucket(null | string): void`

Sets the name of the bucket that will be stored in the local state of the adapter instance. This overrides the value that you may have provided with the config upon instantiation. You can also clear this value by passing `null` as argument.

If you use this method to select a bucket, you don't have to provide a bucket name when you call any of these methods:

- `clearBucket`
- `deleteBucket`
- `bucketExists`
- `addFile`, `addFileFromStream`, `addFileFromBuffer`, `addFileFromPath`
- `getFileAsURL`, `getFileAsStream`
- `fileExists`
- `removeFile`
- `listFiles`
- `sizeOf`

Also implemented as a setter:

```typescript
const storage = new Storage(config);
storage.bucketName = "the-buck-2";
```
`getConfiguration(): AdapterConfig`

Returns the typed configuration object as provided when the storage was instantiated. If you have provided the configuration in URL form, the function will return it as a configuration object.

Also implemented as a getter:

```typescript
const storage = new Storage(config);
console.log(storage.config);
```
`getConfigurationError(): string | null`

Returns an error message if something has gone wrong with initialization or authorization, and `null` otherwise.

Also implemented as a getter:

```typescript
const storage = new Storage(config);
console.log(storage.configError);
```
`getServiceClient(): any`

Under the hood some adapters create an instance of a service client that handles the actual connection with the cloud storage. If that is the case, this method returns the instance of that service client.

For instance, in the adapter for Amazon S3 an instance of the `S3Client` of the AWS SDK v3 is instantiated; this instance will be returned if you call `getServiceClient` on a storage instance with an S3 adapter.

```typescript
// inside the Amazon S3 adapter an instance of the S3Client is created.
// S3Client is part of the aws-sdk
this._client = new S3Client();
```

This method is particularly handy if you need to make API calls that are not implemented in this library. The example below shows how the `CopyObjectCommand` is used directly on the service client. The API of the Storage Abstraction does not (yet) offer a method to copy an object that is already stored in the cloud service, so this can be a way to work around that.

```typescript
const storage = new Storage(config);
const client = storage.getServiceClient();

const input = {
  Bucket: "destinationbucket",
  CopySource: "/sourcebucket/HappyFacejpg",
  Key: "HappyFaceCopyjpg",
};
const command = new CopyObjectCommand(input);
const response = await client.send(command);
```

Also implemented as a getter:

```typescript
const storage = new Storage(config);
console.log(storage.serviceClient);
```
These methods actually access the underlying cloud storage service. All these methods are async and return a promise that always resolves to a `ResultObject` type or a variant thereof:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

If the call succeeds, the `error` key will be `null` and the `value` key will hold the returned value. This can be a simple string `"ok"` or, for instance, an array of bucket names.

In case the call yields an error, the `value` key will be `null` and the `error` key will hold the error message. Usually this is the error message as sent by the cloud storage service, so if necessary you can look up the error message in the documentation of that service to learn more about the error.
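The resulting pattern is to always check the `error` key before using `value`. A minimal sketch, where `createBucketMock` is a hypothetical stand-in for any Storage API call:

```typescript
interface ResultObject {
  value: string | null;
  error: string | null;
}

// Hypothetical stand-in for a Storage API call such as createBucket.
async function createBucketMock(name: string): Promise<ResultObject> {
  if (name === "") {
    return { value: null, error: "bucket name cannot be empty" };
  }
  return { value: "ok", error: null };
}

async function run(): Promise<string> {
  const result = await createBucketMock("the-buck");
  if (result.error !== null) {
    // handle or propagate the error message
    throw new Error(result.error);
  }
  return result.value as string; // "ok"
}
```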
`listBuckets(): Promise<ResultObjectBuckets>`

return type:

```typescript
export type ResultObjectBuckets = {
  value: Array<string> | null;
  error: string | null;
};
```

Returns an array with the names of all buckets in the storage.

Note: depending on the type of storage and the credentials used, you may need extra access rights for this action. E.g. sometimes a user may only access the contents of one single bucket.
`listFiles(bucketName?: string): Promise<ResultObjectFiles>;`

return type:

```typescript
export type ResultObjectFiles = {
  error: string | null;
  value: Array<[string, number]> | null;
};
```

Returns a list of all files in the bucket; for each file a tuple is returned: the first value is the path and the second value is the size of the file. If the call succeeds, the `value` key will hold an array of tuples.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.
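As a sketch, the `[path, size]` tuples can be processed like any array; `listFilesResult` below is a hand-made example value, not the output of a real call:

```typescript
type ResultObjectFiles = {
  error: string | null;
  value: Array<[string, number]> | null;
};

// Hand-made example of a successful listFiles result.
const listFilesResult: ResultObjectFiles = {
  error: null,
  value: [
    ["images/cat.png", 34567],
    ["images/dog.png", 45678],
  ],
};

// Sum the sizes of all files in the bucket.
const totalSize = (listFilesResult.value ?? []).reduce(
  (sum, [, size]) => sum + size,
  0,
); // 80245
```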
`bucketExists(bucketName?: string): Promise<ResultObjectBoolean>;`

return type:

```typescript
export type ResultObjectBoolean = {
  error: string | null;
  value: boolean | null;
};
```

Checks whether a bucket exists or not. If the call succeeds, the `value` key will hold a boolean value.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.
`fileExists(bucketName?: string, fileName: string): Promise<ResultObjectBoolean>;`

return type:

```typescript
export type ResultObjectBoolean = {
  error: string | null;
  value: boolean | null;
};
```

Checks whether a file exists or not. If the call succeeds, the `value` key will hold a boolean value.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.
`createBucket(bucketName: string, options?: object): Promise<ResultObject>;`

return type:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

Creates a new bucket. You can provide extra storage-specific settings such as access rights using the `options` object.

If the bucket was created successfully, the `value` key will hold the string `"ok"`. If the bucket already exists or if creating the bucket fails for another reason, the `error` key will hold the error message.

Note: depending on the type of storage and the credentials used, you may need extra access rights for this action. E.g. sometimes a user may only access the contents of one single bucket and has no rights to create a new bucket.
`clearBucket(bucketName?: string): Promise<ResultObject>;`

return type:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

Removes all files in the bucket. If the call succeeds, the `value` key will hold the string `"ok"`.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

Note: depending on the type of storage and the credentials used, you may need extra access rights for this action.
`deleteBucket(bucketName?: string): Promise<ResultObject>;`

return type:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

Deletes the bucket and all files in it. If the call succeeds, the `value` key will hold the string `"ok"`.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

Note: depending on the type of storage and the credentials used, you may need extra access rights for this action.
`addFile(params: FilePathParams | FileStreamParams | FileBufferParams): Promise<ResultObject>;`

A generic method that is called under the hood when you call `addFileFromPath`, `addFileFromStream` or `addFileFromBuffer`. It adds a file to a bucket and accepts the file in 3 different forms: as a path, a stream or a buffer, depending on the type of `params`.

There is no difference between using this method or one of the 3 specific methods. For details about the `params` object and the return value, see the documentation below.
`addFileFromPath(params: FilePathParams): Promise<ResultObject>;`

param type:

```typescript
export type FilePathParams = {
  bucketName?: string;
  origPath: string;
  targetPath: string;
  options?: {
    [id: string]: any;
  };
};
```

return type:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

Copies a file from a local path `origPath` to the provided path `targetPath` in the storage. The value for `targetPath` needs to include at least a file name. You can provide extra storage-specific settings such as access rights using the `options` object.

The key `bucketName` is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will hold `"no bucket selected"`.

If the call is successful, `value` will hold the public URL to the file (if the bucket is publicly accessible and the authorized user has sufficient rights).
`addFileFromBuffer(params: FileBufferParams): Promise<ResultObject>;`

param type:

```typescript
export type FileBufferParams = {
  bucketName?: string;
  buffer: Buffer;
  targetPath: string;
  options?: {
    [id: string]: any;
  };
};
```

return type:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

Copies a buffer to a file in the storage. The value for `targetPath` needs to include at least a file name. You can provide extra storage-specific settings such as access rights using the `options` object.

The key `bucketName` is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will hold `"no bucket selected"`.

If the call is successful, `value` will hold the public URL to the file (if the bucket is publicly accessible and the authorized user has sufficient rights).

This method is particularly handy when you want to move uploaded files directly to the storage, for instance when you use Express Multer with `MemoryStorage`.
`addFileFromStream(params: FileStreamParams): Promise<ResultObject>;`

param type:

```typescript
export type FileStreamParams = {
  bucketName?: string;
  stream: Readable;
  targetPath: string;
  options?: {
    [id: string]: any;
  };
};
```

return type:

```typescript
export interface ResultObject {
  value: string | null;
  error: string | null;
}
```

Allows you to stream a file directly to the storage. The value for `targetPath` needs to include at least a file name. You can provide extra storage-specific settings such as access rights using the `options` object.

The key `bucketName` is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

If the call is successful, `value` will hold the public URL to the file (if the bucket is publicly accessible and the authorized user has sufficient rights).

This method is particularly handy when you want to store files while they are being processed; for instance, if a user has uploaded a full-size image and you want to store resized versions of this image in the storage, you can pipe the output stream of the resizing process directly to the storage.
`getFileAsURL(bucketName?: string, fileName: string, options?: Options): Promise<ResultObject>;`

param type:

```typescript
export interface Options {
  [id: string]: any; // eslint-disable-line
}
```

return type:

```typescript
export type ResultObject = {
  value: string | null;
  error: string | null;
};
```

Returns the public URL of the file (if the bucket is publicly accessible and the authorized user has sufficient rights).

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

If you want a signed URL to the file, you can add a key `useSignedUrl` to the options object:

```typescript
const signedUrl = getFileAsURL("bucketName", "fileName", { useSignedUrl: true });
```

Note that this doesn't work for the Backblaze and the local adapter.

For the local adapter you can use the key `withoutDirectory`:

```typescript
const s = new Storage({
  type: StorageType.LOCAL,
  directory: "./your_working_dir/sub_dir",
  bucketName: "bucketName",
});

const url1 = getFileAsURL("bucketName", "fileName.jpg");
// your_working_dir/sub_dir/bucketName/fileName.jpg

const url2 = getFileAsURL("bucketName", "fileName.jpg", { withoutDirectory: true });
// bucketName/fileName.jpg
```
`getFileAsStream(bucketName?: string, fileName: string, options?: StreamOptions): Promise<ResultObjectStream>;`

param type:

```typescript
export interface StreamOptions extends Options {
  start?: number;
  end?: number;
}
```

return type:

```typescript
export type ResultObjectStream = {
  value: Readable | null;
  error: string | null;
};
```

Returns a file in the storage as a readable stream. You can pass in extra options. If you use the keys `start` and/or `end`, only the bytes between `start` and `end` of the file will be returned.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

Some examples:

```typescript
getFileAsStream("bucket-name", "image.png");
// → reads whole file

getFileAsStream("bucket-name", "image.png", {});
// → reads whole file

getFileAsStream("bucket-name", "image.png", { start: 0 });
// → reads whole file

getFileAsStream("bucket-name", "image.png", { start: 0, end: 1999 });
// → reads first 2000 bytes

getFileAsStream("bucket-name", "image.png", { end: 1999 });
// → reads first 2000 bytes

getFileAsStream("bucket-name", "image.png", { start: 2000 });
// → reads file from byte 2000
```
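The byte-range semantics above (inclusive `start` and `end`) can be sketched with a plain byte array; the function below is illustrative, real adapters request the range from the storage service:

```typescript
// Slice a byte array using the inclusive start/end semantics of StreamOptions.
function sliceRange(data: Uint8Array, opts: { start?: number; end?: number } = {}): Uint8Array {
  const start = opts.start ?? 0;
  const end = opts.end ?? data.length - 1; // end is inclusive
  return data.subarray(start, end + 1);
}

const file = new Uint8Array(4000);
sliceRange(file).length;                          // 4000: whole file
sliceRange(file, { start: 0, end: 1999 }).length; // 2000: first 2000 bytes
sliceRange(file, { start: 2000 }).length;         // 2000: from byte 2000
```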
`removeFile(bucketName?: string, fileName: string, allVersions: boolean = false): Promise<ResultObject>;`

return type:

```typescript
export interface ResultObject {
  error: string | null;
  value: string | null;
}
```

Removes a file from the bucket. Does not fail if the file doesn't exist.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

If the call succeeds, the `value` key will hold the string `"ok"`.
`sizeOf(bucketName?: string, fileName: string): Promise<ResultObjectNumber>;`

return type:

```typescript
export type ResultObjectNumber = {
  error: string | null;
  value: number | null;
};
```

Returns the size of a file.

The `bucketName` arg is optional; if you don't pass a value, the selected bucket will be used. The selected bucket is the bucket that you've passed with the config upon instantiation or that you've set afterwards using `setSelectedBucket`. If no bucket is selected, the value of the `error` key in the result object will be set to `"no bucket selected"`.

If the call succeeds, the `value` key will hold the size of the file.
The Storage class has two extra methods besides all the methods of the `IAdapter` interface.

`getAdapter(): IAdapter;`

Returns the instance of the adapter class that this Storage instance is currently using to access a storage service.

Also implemented as a getter:

```typescript
const s = new Storage({ type: StorageType.S3 });
const a = s.adapter;
```
`switchAdapter(config: string | AdapterConfig): void;`

This method is used to instantiate the right adapter when you create a Storage instance. It can also be used to switch to another adapter in an existing Storage instance at runtime.

The config parameter is the same type of object or URL that you use to instantiate a Storage. This method can be handy if your application needs a view on multiple storages.

If your application needs to copy files from one storage service to another, say for instance from Google Cloud to Amazon S3, it is more convenient to create 2 separate Storage instances (note that `getFileAsStream` resolves to a `ResultObjectStream`, so the stream has to be unwrapped first):

```typescript
import { Storage } from "@tweedegolf/storage-abstraction";

const s1 = new Storage({ type: "s3" });
const s2 = new Storage({ type: "gcs" });

const { value: stream } = await s1.getFileAsStream("bucketOnAmazon", "some-image.png");
await s2.addFile({
  bucketName: "bucketOnGoogleCloud",
  stream,
  targetPath: "copy-of-some-image.png",
});
```
It is relatively easy to add an adapter for an unsupported cloud service. Note, however, that many cloud storage services are compatible with Amazon S3; if that is the case, please first check if the Amazon S3 adapter does the job; it might work right away. Sometimes, even if a storage service is S3 compatible, you still have to write a separate adapter; for instance, although MinIO is S3 compliant, it was necessary to write a separate adapter for it.
If you want to add an adapter you can choose to make your adapter a class or a function; so if you don't like OOP you can implement your adapter using FP or any other coding style or programming paradigm you like.
Your adapter might have additional dependencies, such as a service client library like the aws-sdk used in the Amazon S3 adapter. Add these dependencies to the package.json file in the `./publish/YourAdapter` folder.

If you want to add your adapter code to this package, please add your dependencies to the package.json file in the root folder of the Storage Abstraction package as well. Your dependencies will not be added to the Storage Abstraction package when published to npm, because only the files in the publish folder are published and there is a stripped version of the package.json file in the `./publish/Storage` folder.
You may also want to add some tests for your adapter, and it would be very much appreciated if you could publish your adapter to npm and add it to this README, see this table.
Follow these steps:

1. Add a new type to the `StorageType` enum in `./src/types/general.ts`
2. Define a configuration object (and a configuration URL if you like)
3. Write your adapter, making sure it implements all API methods
4. Register your adapter in `./src/adapters.ts`
5. Publish your adapter on npm
6. Optionally add the newly supported cloud storage service to the keywords array in the package.json file of the Storage Abstraction package (note: there are 2 package.json files for this package, one in the root folder and another in the publish folder)
You should add the name of your type to the enum `StorageType` in `./src/types/general.ts`. It is not mandatory but it may be very handy.

```typescript
// add your type to the enum
enum StorageType {
  LOCAL = "local",
  GCS = "gcs",     // Google Cloud Storage
  S3 = "s3",       // Amazon S3
  B2 = "b2",       // Backblaze B2
  AZURE = "azure", // Microsoft Azure Blob
  MINIO = "minio",
  YOUR_TYPE = "yourtype",
}
```
A configuration object type should at least contain a key `type`. To enforce this, the Storage class expects the config object to be of type `StorageAdapterConfig`:

```typescript
export interface AdapterConfig {
  bucketName?: string;
  [id: string]: any; // eslint-disable-line
}

export interface StorageAdapterConfig extends AdapterConfig {
  type: string;
}
```
For your custom configuration object you can choose to extend either `StorageAdapterConfig` or `AdapterConfig`. If you choose the latter, you can use your adapter standalone without having to provide a redundant key `type`, which is why the configuration objects of all existing adapters extend `AdapterConfig`.

```typescript
export interface YourAdapterConfig extends AdapterConfig {
  additionalKey: string,
  ...
}

// works!
const s = new Storage({
  type: StorageType.YOUR_TYPE, // mandatory for Storage
  key1: string, // other mandatory or optional keys that your adapter needs for instantiation
  key2: string,
});

// works because type is not mandatory for the standalone adapter
const a = new YourAdapter({
  key1: string,
  key2: string,
});
```
Your configuration URL should also at least contain the type. The name of the type is used for the protocol part of the URL. Upon instantiation the Storage class checks if a protocol is present in the provided URL.

Example:

```typescript
// your configuration URL
const u = "yourtype://user:pass@bucket_name?option1=value1&...";
```
You can format the configuration URL completely as you like, as long as your adapter has an appropriate function to parse it into the configuration object that your adapter expects. If your URL follows the standard URL format, you don't need to write a parse function; you can import the `parseUrl` function from `./src/util.ts`.

For more information about configuration URLs, please read this section.
It is recommended that your adapter class extends `AbstractAdapter`. If you look at the code you can see that it implements the complete introspective API. `getServiceClient` returns an `any` value and `getConfig` returns a generic `AdapterConfig` object; you may want to override these methods to make them return your adapter-specific types.
Note that all API methods that have an optional `bucketName` arg are implemented as overloaded methods:

- `clearBucket`
- `deleteBucket`
- `bucketExists`
- `getFileAsURL`
- `getFileAsStream`
- `fileExists`
- `removeFile`
- `listFiles`
- `sizeOf`
The implementation of these methods in the `AbstractAdapter` handles the overloading and performs some general checks that apply to all adapters. It then calls the cloud-specific protected 'tandem' function that handles the adapter-specific logic. The tandem function has the same name with an underscore prefix.
For instance: the implementation of `clearBucket` in `AbstractAdapter` checks for a `bucketName` arg, and if it is not provided, it looks if there is a selected bucket set. It also checks for configuration errors. Then it calls `_clearBucket`, which should be implemented in your adapter code to handle your cloud-storage-specific logic. This saves you a lot of hassle and code in your adapter module.
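The tandem pattern can be sketched as follows; the class and member names follow the text above, but the bodies are simplified stubs, not the real implementation:

```typescript
type ResultObject = { value: string | null; error: string | null };

// Simplified sketch of AbstractAdapter: the generic checks live here.
abstract class MiniAbstractAdapter {
  protected _bucketName: string | null = null;
  protected _configError: string | null = null;

  public setSelectedBucket(name: string | null): void {
    this._bucketName = name;
  }

  public async clearBucket(name?: string): Promise<ResultObject> {
    if (this._configError !== null) {
      return { value: null, error: this._configError };
    }
    const bucket = name ?? this._bucketName;
    if (bucket === null) {
      return { value: null, error: "no bucket selected" };
    }
    return this._clearBucket(bucket); // adapter specific logic
  }

  protected abstract _clearBucket(name: string): Promise<ResultObject>;
}

// Your adapter only implements the underscore-prefixed tandem function.
class MyAdapter extends MiniAbstractAdapter {
  protected async _clearBucket(name: string): Promise<ResultObject> {
    // a real adapter would delete all objects in `name` via its service client
    return { value: "ok", error: null };
  }
}
```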
One other thing to note is the way `addFileFromPath`, `addFileFromBuffer` and `addFileFromReadable` are implemented; these are all forwarded to the API function `addFile`. This function stores files in the storage using 3 different types of origin: a path, a buffer and a stream. Because these ways of storing have a lot in common, they are grouped together in a single method.
If you look at `addFile` you see that, just like the overloaded methods mentioned above, the implementation handles some generic logic and then calls `_addFile` in your adapter code.
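The forwarding can be pictured roughly like this; the param type shapes below are simplified placeholders (e.g. `Uint8Array` standing in for a Node `Buffer`), not the package's actual types:

```typescript
// Simplified sketch: three public entry points funnel into one addFile.
type FromPath = { targetPath: string; origPath: string };
type FromBuffer = { targetPath: string; buffer: Uint8Array };
type FromStream = { targetPath: string; stream: AsyncIterable<Uint8Array> };

class SketchAdapter {
  addFileFromPath(params: FromPath): string {
    return this.addFile(params);
  }
  addFileFromBuffer(params: FromBuffer): string {
    return this.addFile(params);
  }
  addFileFromReadable(params: FromStream): string {
    return this.addFile(params);
  }
  // generic checks live here; the adapter-specific work happens in _addFile
  addFile(params: FromPath | FromBuffer | FromStream): string {
    if ("origPath" in params) return this._addFile("path", params.targetPath);
    if ("buffer" in params) return this._addFile("buffer", params.targetPath);
    return this._addFile("stream", params.targetPath);
  }
  protected _addFile(origin: string, targetPath: string): string {
    return `stored ${targetPath} from ${origin}`;
  }
}
```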
The abstract stub methods need to be implemented, and the other `IAdapter` methods can be overridden in your adapter class if necessary. Note that your adapter should not implement the methods `getAdapter` and `switchAdapter`; these are part of the Storage API.
You don't necessarily have to extend `AbstractAdapter`, but if you choose not to, your class should implement the `IAdapter` interface. You'll find some configuration parse functions in the separate file `./src/util.ts`, so you can easily import these into your own class if they are useful for you.
You can use this template as a starting point for your adapter. The template contains a lot of additional documentation per method.
The only requirement for this type of adapter is that your module exports a function `createAdapter` that takes a configuration object or URL as parameter and returns an object that has the shape or type of the interface `IAdapter`.
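In outline, such a module could look as follows; only a fragment of the `IAdapter` surface is shown, and the config handling is illustrative:

```typescript
// Minimal outline of a stand-alone adapter module.
interface IAdapter {
  getType(): string;
  // ...the rest of the IAdapter methods
}

export function createAdapter(
  config: string | { type: string; [key: string]: unknown }
): IAdapter {
  // a URL string carries the type in its protocol part,
  // a config object carries it in the `type` key
  const type = typeof config === "string" ? config.split("://")[0] : config.type;
  return {
    getType: () => type,
    // ...implement the remaining IAdapter methods here
  };
}
```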
You may want to check if you can use some of the utility functions defined in `./src/util.js`. Also, there is a template file that you can use as a starting point for your module.
The `switchAdapter` method of Storage parses the type from the configuration and then creates the appropriate adapter instance. This is done via a lookup table that maps a storage type to a tuple containing the name of the adapter and the path to the adapter module:
```typescript
export const adapterClasses = {
  s3: ["AdapterAmazonS3", "@tweedegolf/sab-adapter-amazon-s3"],
  your_type: ["AdapterYourService", "@you/sab-adapter-your-service"],
  ...
};
```
If `switchAdapter` fails to find the module at the specified path, it tries to find it in the source folder by looking for a file that has the same name as your adapter, so in the example above it looks for `./src/AdapterYourService.ts`.
Once the module is found, it will be loaded at runtime using `require()`. An error will be thrown if the type is not declared or if the module cannot be found.
The lookup table is defined in `./src/adapters.ts`.
You can create your own adapter in a separate repository and publish it from there to npm. You may also want to add your adapter code to this package; to do this, follow these steps:

- Place the adapter in the `./src` folder
- Create a file that contains all your types in the `./src/types` folder
- Create an index file in the `./src/indexes` folder
- Create a folder with the same name as your adapter in the `./publish` folder
- Add a package.json and a README.md file to this folder
- Add your adapter to the `copy.ts` file in the root folder
If you want to run the tests, you have to check out the repository from GitHub and install all dependencies with `npm install` or `yarn install`. There are tests for all storage types; note that you may need to add your credentials to a `.env` file (see the file `.env.default` for more explanation) or provide credentials in another way. Also, it should be noted that some of these tests require credentials that allow creating, deleting and listing buckets.
You can run the Jasmine tests per storage type using one of the following commands:
```shell
# test local disk
npm run test-local
# or
npm run test-jasmine 0

# test Amazon S3
npm run test-s3
# or
npm run test-jasmine 1

# test Backblaze B2
npm run test-b2
# or
npm run test-jasmine 2

# test Google Cloud Storage
npm run test-gcs
# or
npm run test-jasmine 3

# test Azure Blob Storage
npm run test-azure
# or
npm run test-jasmine 4

# test MinIO
npm run test-minio
# or
npm run test-jasmine 5

# test Cubbit
npm run test-jasmine 6

# test Cloudflare R2
npm run test-jasmine 7

# test Backblaze B2 S3 API
npm run test-jasmine 8
```
As you can see in the file `package.json`, each command sets the `type` environment variable, which is read by Jasmine.
To run all Jasmine tests consecutively:
```shell
npm run test-all
```
You can find some additional non-Jasmine tests in the file `tests/test.ts`. First select which type of storage you want to test, then uncomment the API calls you want to test, and finally run:
```shell
npm test
```
NOTE: not yet updated to API 2.0!
A simple application that shows how you can use the storage abstraction package can be found in this repository. It uses Ts.ED and TypeORM, and it consists of both a backend and a frontend.
Please let us know if you have any questions and/or requests by creating an issue.