This article shows how to upload a blob using the Azure Storage client library for JavaScript. You can upload data to a block blob from a file path, a stream, a buffer, or a text string. You can also upload blobs with index tags.
You can use any of the following methods to upload data to a block blob:

- uploadFile (from a local file path)
- uploadStream (from a readable stream)
- uploadData (from a buffer)
- upload (from a string)

Each of these methods can be called using a BlockBlobClient object.
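To make the relationship between these methods concrete, the following sketch (a hypothetical helper, not part of the client library) dispatches to the upload method that matches the kind of source data:

```javascript
// Hypothetical helper (not part of the client library): dispatches to the
// BlockBlobClient upload method that matches the kind of source data.
async function uploadToBlockBlob(blockBlobClient, kind, source) {
  switch (kind) {
    case 'path': // local file path
      return blockBlobClient.uploadFile(source);
    case 'stream': // readable stream
      return blockBlobClient.uploadStream(source);
    case 'buffer': // Node.js Buffer
      return blockBlobClient.uploadData(source);
    case 'string': // string contents (simple upload)
      return blockBlobClient.upload(source, Buffer.byteLength(source));
    default:
      throw new Error(`Unknown source kind: ${kind}`);
  }
}
```
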
Note
The Azure Storage client libraries don't support concurrent writes to the same blob. If your app requires multiple processes writing to the same blob, you should implement a strategy for concurrency control to provide a predictable experience. To learn more about concurrency strategies, see Manage concurrency in Blob Storage.
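One common strategy is optimistic concurrency using ETags. As a sketch (the helper name here is hypothetical; the conditions option is part of the upload options), an upload can be made conditional on the blob not having changed since it was last read:

```javascript
// Sketch of optimistic concurrency: re-upload blob contents only if the
// blob's ETag still matches the one read earlier. If another writer has
// modified the blob in the meantime, the service rejects the request
// with a 412 (Precondition Failed) error.
async function uploadIfUnchanged(blockBlobClient, content, knownEtag) {
  return blockBlobClient.upload(content, Buffer.byteLength(content), {
    conditions: { ifMatch: knownEtag },
  });
}
```

The caller would typically obtain `knownEtag` from an earlier `getProperties` or download response, and retry after re-reading the blob if the request fails with a 412 error.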
The following example uploads a block blob from a local file path:
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// localFilePath: fully qualified path and file name
async function uploadBlobFromLocalPath(containerClient, blobName, localFilePath) {
  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  await blockBlobClient.uploadFile(localFilePath);
}
```

The following example uploads a block blob by creating a readable stream and uploading the stream:
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// readableStream: Readable stream, for example, a stream returned from fs.createReadStream()
async function uploadBlobFromReadStream(containerClient, blobName, readableStream) {
  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  // Upload data to block blob using a readable stream
  await blockBlobClient.uploadStream(readableStream);
}
```

The following example uploads a block blob from a Node.js buffer:
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// buffer: blob contents as a buffer, for example, from fs.readFile()
async function uploadBlobFromBuffer(containerClient, blobName, buffer) {
  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  // Upload buffer
  await blockBlobClient.uploadData(buffer);
}
```

The following example uploads a block blob from a string:
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// fileContentsAsString: blob content
async function uploadBlobFromString(containerClient, blobName, fileContentsAsString) {
  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  await blockBlobClient.upload(fileContentsAsString, fileContentsAsString.length);
}
```

You can define client library configuration options when uploading a blob. These options can be tuned to improve performance, enhance reliability, and optimize costs. The code examples in this section show how to set configuration options using the BlockBlobParallelUploadOptions interface, and how to pass those options as a parameter to an upload method call.
You can configure properties in BlockBlobParallelUploadOptions to improve performance for data transfer operations. The following table lists the properties you can configure, along with a description:
| Property | Description |
|---|---|
| blockSize | The maximum block size to transfer for each request as part of an upload operation. |
| concurrency | The maximum number of parallel requests that are issued at any given time as a part of a single parallel transfer. |
| maxSingleShotSize | If the size of the data is less than or equal to this value, it's uploaded in a single put rather than broken up into chunks. If the data is uploaded in a single shot, the block size is ignored. Default value is 256 MiB. |
The following code example shows how to set values for BlockBlobParallelUploadOptions and include the options as part of an upload method call. The values provided in the samples aren't intended to be a recommendation. To properly tune these values, you need to consider the specific needs of your app.
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// localFilePath: fully qualified path and file name
async function uploadWithTransferOptions(containerClient, blobName, localFilePath) {
  // Specify data transfer options
  const uploadOptions = {
    blockSize: 4 * 1024 * 1024, // 4 MiB max block size
    concurrency: 2, // maximum number of parallel transfer workers
    maxSingleShotSize: 8 * 1024 * 1024, // 8 MiB initial transfer size
  };

  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  // Upload blob with transfer options
  await blockBlobClient.uploadFile(localFilePath, uploadOptions);
}
```

To learn more about tuning data transfer options, see Performance tuning for uploads and downloads with JavaScript.
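To make the trade-offs concrete, the following sketch (a hypothetical helper; the thresholds are illustrative examples, not recommendations) picks transfer options based on the total size of the data to upload:

```javascript
const MiB = 1024 * 1024;

// Hypothetical helper: choose illustrative transfer options based on the
// total size of the data to upload. These thresholds are examples only;
// tune them for your app's workload and network conditions.
function chooseUploadOptions(totalSizeBytes) {
  if (totalSizeBytes <= 8 * MiB) {
    // Small payloads: send in a single put
    return { maxSingleShotSize: 8 * MiB };
  }
  if (totalSizeBytes <= 256 * MiB) {
    // Medium payloads: moderate block size, a few parallel workers
    return { blockSize: 4 * MiB, concurrency: 2 };
  }
  // Large payloads: bigger blocks, more parallelism
  return { blockSize: 8 * MiB, concurrency: 4 };
}
```

The result object can be passed directly as the options argument of an upload method call, just as in the preceding example.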
Blob index tags categorize data in your storage account using key-value tag attributes. These tags are automatically indexed and exposed as a searchable multi-dimensional index to easily find data.
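For example, blobs uploaded with index tags can later be located with BlobServiceClient.findBlobsByTags, which takes a SQL-like filter expression. The helper below is hypothetical (not part of the client library) and builds such an expression from a tags object; the exact filter syntax is defined by the Find Blobs by Tags REST operation:

```javascript
// Hypothetical helper: build a tag filter expression such as
//   "Content" = 'image' AND "Date" = '2022-07-18'
// for use with BlobServiceClient.findBlobsByTags().
function buildTagFilter(tags) {
  return Object.entries(tags)
    .map(([key, value]) => `"${key}" = '${value}'`)
    .join(' AND ');
}

// Example usage (sketch, assuming an authenticated blobServiceClient):
// for await (const blob of blobServiceClient.findBlobsByTags(buildTagFilter({ Content: 'image' }))) {
//   console.log(blob.name);
// }
```
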
The following example uploads a block blob with index tags set using BlockBlobParallelUploadOptions:
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// localFilePath: fully qualified path and file name
async function uploadWithIndexTags(containerClient, blobName, localFilePath) {
  // Specify index tags for blob
  const uploadOptions = {
    tags: {
      'Sealed': 'false',
      'Content': 'image',
      'Date': '2022-07-18',
    },
  };

  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  // Upload blob with index tags
  await blockBlobClient.uploadFile(localFilePath, uploadOptions);
}
```

You can set a blob's access tier on upload by using the BlockBlobParallelUploadOptions interface. The following code example shows how to set the access tier when uploading a blob:
```javascript
// containerClient: ContainerClient object
// blobName: string, includes file extension if provided
// localFilePath: fully qualified path and file name
async function uploadWithAccessTier(containerClient, blobName, localFilePath) {
  // Specify access tier
  const uploadOptions = {
    // 'Hot', 'Cool', 'Cold', or 'Archive'
    tier: 'Cool',
  };

  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  // Upload blob to cool tier
  await blockBlobClient.uploadFile(localFilePath, uploadOptions);
}
```

Setting the access tier is only allowed for block blobs. You can set the access tier for a block blob to Hot, Cool, Cold, or Archive. To set the access tier to Cold, you must use a minimum client library version of 12.13.0.
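The access tier can also be changed after upload. As a sketch (assuming the blob already exists; the helper name is hypothetical), BlockBlobClient.setAccessTier moves a blob to another tier:

```javascript
// Sketch: move an existing block blob to the archive tier. Bringing an
// archived blob back online later requires changing its tier to an
// online tier (Hot, Cool, or Cold), which can take hours to complete.
async function archiveBlob(containerClient, blobName) {
  // Create blob client from container client
  const blockBlobClient = containerClient.getBlockBlobClient(blobName);

  await blockBlobClient.setAccessTier('Archive');
}
```
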
To learn more about access tiers, see Access tiers overview.
To learn more about uploading blobs using the Azure Blob Storage client library for JavaScript, see the following resources.
The Azure SDK for JavaScript contains libraries that build on top of the Azure REST API, allowing you to interact with REST API operations through familiar JavaScript paradigms. The client library methods for uploading blobs use the following REST API operations:
View code samples from this article on GitHub.