This guide is an introduction to developing Azure Functions using JavaScript or TypeScript. The article assumes that you have already read the Azure Functions developer guide.
Important
The content of this article changes based on your choice of the Node.js programming model in the selector at the top of this page. The version you choose should match the version of the @azure/functions npm package you're using in your app. If you don't have that package listed in your package.json, the default is v3. Learn more about the differences between v3 and v4 in the migration guide.
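For example, an app on the v4 model typically declares the package explicitly in its package.json dependencies (the version range shown is illustrative):

```json
{
  "dependencies": {
    "@azure/functions": "^4.0.0"
  }
}
```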
As a Node.js developer, you might also be interested in articles covering getting started, concepts, and guided learning.
The programming model is shipped in the @azure/functions npm package. It's versioned independently of the runtime. Both the runtime and the programming model use the number 4 as their latest major version, but that's a coincidence.

The following table shows each version of the Node.js programming model along with its supported versions of the Azure Functions runtime and Node.js.
| Programming Model Version | Support Level | Functions Runtime Version | Node.js Version | Description |
|---|---|---|---|---|
| 4.x | GA | 4.25+ | 22.x, 20.x, 18.x | Supports a flexible file structure and a code-centric approach to triggers and bindings. |
| 3.x | GA | 4.x | 20.x, 18.x, 16.x, 14.x | Requires a specific file structure with your triggers and bindings declared in a "function.json" file. |
| 2.x | n/a | 3.x | 14.x, 12.x, 10.x | Reached end of support on December 13, 2022. See Functions Versions for more info. |
| 1.x | n/a | 2.x | 10.x, 8.x | Reached end of support on December 13, 2022. See Functions Versions for more info. |
The required folder structure for a JavaScript project looks like the following example:
```
<project_root>/
 | - .vscode/
 | - node_modules/
 | - myFirstFunction/
 | | - index.js
 | | - function.json
 | - mySecondFunction/
 | | - index.js
 | | - function.json
 | - .funcignore
 | - host.json
 | - local.settings.json
 | - package.json
```

The main project folder, <project_root>, can contain the following files:
The recommended folder structure for a JavaScript project looks like the following example:
```
<project_root>/
 | - .vscode/
 | - node_modules/
 | - src/
 | | - functions/
 | | | - myFirstFunction.js
 | | | - mySecondFunction.js
 | - test/
 | | - functions/
 | | | - myFirstFunction.test.js
 | | | - mySecondFunction.test.js
 | - .funcignore
 | - host.json
 | - local.settings.json
 | - package.json
```

The main project folder, <project_root>, can contain the following files:
The v3 model registers a function based on the existence of two files. First, you need a function.json file located in a folder one level down from the root of your app. Second, you need a JavaScript file that exports your function. By default, the model looks for an index.js file in the same folder as your function.json. If you're using TypeScript, you must use the scriptFile property in function.json to point to the compiled JavaScript file. To customize the file location or export name of your function, see configuring your function's entry point.
The function you export should always be declared as an async function in the v3 model. You can export a synchronous function, but then you must call context.done() to signal that your function is completed, a pattern that is deprecated and not recommended.
Your function is passed an invocation context as the first argument and your inputs as the remaining arguments.
The following example is a simple function that logs that it was triggered and responds with Hello, world!:
```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "anonymous",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}
```

```javascript
module.exports = async function (context, request) {
  context.log("Http function was triggered.");
  context.res = { body: "Hello, world!" };
};
```

The programming model loads your functions based on the main field in your package.json. You can set the main field to a single file or multiple files by using a glob pattern. The following table shows example values for the main field:
| Example | Description |
|---|---|
| src/index.js | Register functions from a single root file. |
| src/functions/*.js | Register each function from its own file. |
| src/{index.js,functions/*.js} | A combination where you register each function from its own file, but you still have a root file for general app-level code. |
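For instance, a package.json that registers each function from its own file uses one of the glob patterns from the table above as its main value (a minimal sketch; the name and version fields are illustrative):

```json
{
  "name": "my-function-app",
  "version": "1.0.0",
  "main": "src/functions/*.js"
}
```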
To register a function, import the app object from the @azure/functions npm module and call the method specific to your trigger type. The first argument when registering a function is the function name. The second argument is an options object specifying configuration for your trigger, your handler, and any other inputs or outputs. In some cases where trigger configuration isn't necessary, you can pass the handler directly as the second argument instead of an options object.
You can register a function from any file in your project, as long as that file is loaded (directly or indirectly) based on the main field in your package.json file. The function should be registered at global scope because you can't register functions once executions start.
The following example is a simple function that logs that it was triggered and responds with Hello, world!:
```javascript
const { app } = require("@azure/functions");

app.http("helloWorld1", {
  methods: ["POST", "GET"],
  handler: async (request, context) => {
    context.log("Http function was triggered.");
    return { body: "Hello, world!" };
  },
});
```

Your function is required to have exactly one primary input called the trigger. It might also have secondary inputs and/or outputs. Inputs and outputs are configured in your function.json files and are also referred to as bindings.
Inputs are bindings with direction set to in. The main difference between a trigger and a secondary input is that the type for a trigger ends in Trigger, for example type blobTrigger vs. type blob. Most functions only use a trigger, and not many secondary input types are supported.
Inputs can be accessed in several ways:
[Recommended] As arguments passed to your function: Use the arguments in the same order that they're defined in function.json. The name property defined in function.json doesn't need to match the name of your argument, although we recommend it for the sake of organization.
```javascript
module.exports = async function (context, myTrigger, myInput, myOtherInput) { ... };
```

As properties of context.bindings: Use the key matching the name property defined in function.json.
```javascript
module.exports = async function (context) {
  context.log("This is myTrigger: " + context.bindings.myTrigger);
  context.log("This is myInput: " + context.bindings.myInput);
  context.log("This is myOtherInput: " + context.bindings.myOtherInput);
};
```

Outputs are bindings with direction set to out and can be set in several ways:
[Recommended for single output] Return the value directly: If you're using an async function, you can return the value directly. You must change the name property of the output binding to $return in function.json, as in the following example:
```json
{
  "name": "$return",
  "type": "http",
  "direction": "out"
}
```

```javascript
module.exports = async function (context, request) {
  return {
    body: "Hello, world!",
  };
};
```

[Recommended for multiple outputs] Return an object containing all outputs: If you're using an async function, you can return an object with a property matching the name of each binding in your function.json. The following example uses output bindings named "httpResponse" and "queueOutput":
```json
{
  "name": "httpResponse",
  "type": "http",
  "direction": "out"
},
{
  "name": "queueOutput",
  "type": "queue",
  "direction": "out",
  "queueName": "helloworldqueue",
  "connection": "storage_APPSETTING"
}
```

```javascript
module.exports = async function (context, request) {
  let message = "Hello, world!";
  return {
    httpResponse: {
      body: message,
    },
    queueOutput: message,
  };
};
```

Set values on context.bindings: If you're not using an async function or you don't want to use the previous options, you can set values directly on context.bindings, where the key matches the name of the binding. The following example uses output bindings named "httpResponse" and "queueOutput":
```json
{
  "name": "httpResponse",
  "type": "http",
  "direction": "out"
},
{
  "name": "queueOutput",
  "type": "queue",
  "direction": "out",
  "queueName": "helloworldqueue",
  "connection": "storage_APPSETTING"
}
```

```javascript
module.exports = async function (context, request) {
  let message = "Hello, world!";
  context.bindings.httpResponse = {
    body: message,
  };
  context.bindings.queueOutput = message;
};
```

You can use the dataType property on an input binding to change the type of your input. However, the approach has some limitations:
- Only the string and binary values are supported (stream isn't).
- For HTTP triggers, the dataType property is ignored. Instead, use properties on the request object to get the body in your desired format. For more information, see HTTP request.

In the following example of a storage queue trigger, the default type of myQueueItem is a string, but if you set dataType to binary, the type changes to a Node.js Buffer.
```json
{
  "name": "myQueueItem",
  "type": "queueTrigger",
  "direction": "in",
  "queueName": "helloworldqueue",
  "connection": "storage_APPSETTING",
  "dataType": "binary"
}
```

```javascript
const { Buffer } = require("node:buffer");

module.exports = async function (context, myQueueItem) {
  if (typeof myQueueItem === "string") {
    context.log("myQueueItem is a string");
  } else if (Buffer.isBuffer(myQueueItem)) {
    context.log("myQueueItem is a buffer");
  }
};
```

Your function is required to have exactly one primary input called the trigger. It might also have secondary inputs, a primary output called the return output, and/or secondary outputs. Inputs and outputs are also referred to as bindings outside the context of the Node.js programming model. Before v4 of the model, these bindings were configured in function.json files.
The trigger is the only required input or output. For most trigger types, you register a function by using a method on the app object named after the trigger type. You can specify configuration specific to the trigger directly on the options argument. For example, an HTTP trigger allows you to specify a route. During execution, the value corresponding to this trigger is passed in as the first argument to your handler.
```javascript
const { app } = require('@azure/functions');

app.http('helloWorld1', {
    route: 'hello/world',
    handler: async (request, context) => { ... }
});
```

The return output is optional, and in some cases configured by default. For example, an HTTP trigger registered with app.http is configured to return an HTTP response output automatically. For most output types, you specify the return configuration on the options argument with the help of the output object exported from the @azure/functions module. During execution, you set this output by returning it from your handler.
The following example uses a timer trigger and a storage queue output:
```javascript
const { app, output } = require('@azure/functions');

app.timer('timerTrigger1', {
    schedule: '0 */5 * * * *',
    return: output.storageQueue({
        connection: 'storage_APPSETTING',
        ...
    }),
    handler: (myTimer, context) => {
        return { hello: 'world' }
    }
});
```

In addition to the trigger and return, you might specify extra inputs or outputs on the options argument when registering a function. The input and output objects exported from the @azure/functions module provide type-specific methods to help construct the configuration. During execution, you get or set the values with context.extraInputs.get or context.extraOutputs.set, passing in the original configuration object as the first argument.
The following example is a function triggered by a storage queue, with an extra storage blob input that is copied to an extra storage blob output. The queue message should be the name of a file and replaces {queueTrigger} as the blob name to be copied, with the help of a binding expression.
```javascript
const { app, input, output } = require("@azure/functions");

const blobInput = input.storageBlob({
  connection: "storage_APPSETTING",
  path: "helloworld/{queueTrigger}",
});

const blobOutput = output.storageBlob({
  connection: "storage_APPSETTING",
  path: "helloworld/{queueTrigger}-copy",
});

app.storageQueue("copyBlob1", {
  queueName: "copyblobqueue",
  connection: "storage_APPSETTING",
  extraInputs: [blobInput],
  extraOutputs: [blobOutput],
  handler: (queueItem, context) => {
    const blobInputValue = context.extraInputs.get(blobInput);
    context.extraOutputs.set(blobOutput, blobInputValue);
  },
});
```

The app, trigger, input, and output objects exported by the @azure/functions module provide type-specific methods for most types. For all the types that aren't supported, a generic method is provided to allow you to manually specify the configuration. The generic method can also be used if you want to change the default settings provided by a type-specific method.
The following example is a simple HTTP triggered function using generic methods instead of type-specific methods.
```javascript
const { app, output, trigger } = require("@azure/functions");

app.generic("helloWorld1", {
  trigger: trigger.generic({
    type: "httpTrigger",
    methods: ["GET", "POST"],
  }),
  return: output.generic({
    type: "http",
  }),
  handler: async (request, context) => {
    context.log(`Http function processed request for url "${request.url}"`);
    return { body: `Hello, world!` };
  },
});
```

The new SDK bindings capability in Azure Functions allows you to work directly with Azure SDK types like BlobClient and ContainerClient instead of raw data. This provides full access to all SDK methods when working with blobs.
Add the @azure/functions-extensions-blob extension preview package to the package.json file in the project, which should include at least these packages:

```json
{
  "dependencies": {
    "@azure/functions": "4.7.1-preview",
    "@azure/functions-extensions-blob": "0.1.0-preview"
  }
}
```

index.ts

```typescript
import { app } from "@azure/functions";

app.setup({
  enableHttpStream: true,
});
```

BlobClient Example
This example shows how to get the BlobClient from a Storage Blob trigger and from an input binding on an HTTP trigger:
```typescript
import "@azure/functions-extensions-blob"; // NOTE: Ensure this line is at the top of your functions file.
import { StorageBlobClient } from "@azure/functions-extensions-blob";
import { app, InvocationContext } from "@azure/functions";

export async function storageBlobTrigger(
  blobStorageClient: StorageBlobClient, // SDK binding provides this client
  context: InvocationContext
): Promise<void> {
  context.log(`Blob trigger processing: ${context.triggerMetadata.name}`);

  // Access to full SDK capabilities
  const blobProperties = await blobStorageClient.blobClient.getProperties();
  context.log(`Blob size: ${blobProperties.contentLength}`);

  // Download blob content
  const downloadResponse = await blobStorageClient.blobClient.download();
  context.log(`Content: ${downloadResponse}`);
}

// Register the function
app.storageBlob("storageBlobTrigger", {
  path: "snippets/{name}",
  connection: "AzureWebJobsStorage",
  sdkBinding: true, // Enable SDK binding
  handler: storageBlobTrigger,
});
```

Ensure the AzureWebJobsStorage connection string is defined in local.settings.json when testing locally. To test with the local storage emulator, use the following local.settings.json:
```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```

ContainerClient Example with Input Binding

The following example shows how to get the ContainerClient from a Storage Blob input binding on an HTTP trigger:
```typescript
import "@azure/functions-extensions-blob"; // NOTE: Ensure this line is at the top of your functions file.
import { StorageBlobClient } from "@azure/functions-extensions-blob";
import {
  app,
  input,
  HttpRequest,
  HttpResponseInit,
  InvocationContext,
} from "@azure/functions";

const blobInput = input.storageBlob({
  path: "snippets",
  connection: "AzureWebJobsStorage",
  sdkBinding: true, // Enable SDK binding
});

export async function listBlobs(
  request: HttpRequest,
  context: InvocationContext
): Promise<HttpResponseInit> {
  // Get input binding for a specific container
  const storageBlobClient = context.extraInputs.get(
    blobInput
  ) as StorageBlobClient;

  // List all blobs in the container
  const blobs = [];
  for await (const blob of storageBlobClient.containerClient.listBlobsFlat()) {
    blobs.push(blob.name);
  }
  return { jsonBody: { blobs } };
}

app.http("listBlobs", {
  methods: ["GET"],
  authLevel: "function",
  extraInputs: [blobInput],
  handler: listBlobs,
});
```

Check out the Blob Storage SDK Bindings for Node.js Samples for more examples on how to incorporate SDK bindings for Blob into your function app.
Each invocation of your function is passed an invocation context object, used to read inputs, set outputs, write to logs, and read various metadata. In the v3 model, the context object is always the first argument passed to your handler.

The context object has the following properties:
| Property | Description |
|---|---|
| invocationId | The ID of the current function invocation. |
| executionContext | See execution context. |
| bindings | See bindings. |
| bindingData | Metadata about the trigger input for this invocation, not including the value itself. For example, an event hub trigger has an enqueuedTimeUtc property. |
| traceContext | The context for distributed tracing. For more information, see Trace Context. |
| bindingDefinitions | The configuration of your inputs and outputs, as defined in function.json. |
| req | See HTTP request. |
| res | See HTTP response. |
The context.executionContext object has the following properties:
| Property | Description |
|---|---|
| invocationId | The ID of the current function invocation. |
| functionName | The name of the function that is being invoked. The name of the folder containing the function.json file determines the name of the function. |
| functionDirectory | The folder containing the function.json file. |
| retryContext | See retry context. |
The context.executionContext.retryContext object has the following properties:
| Property | Description |
|---|---|
| retryCount | A number representing the current retry attempt. |
| maxRetryCount | The maximum number of times an execution is retried. A value of -1 means to retry indefinitely. |
| exception | The exception that caused the retry. |
The context.bindings object is used to read inputs or set outputs. The following example is a storage queue trigger, which uses context.bindings to copy a storage blob input to a storage blob output. The queue message's content replaces {queueTrigger} as the file name to be copied, with the help of a binding expression.
```json
{
  "name": "myQueueItem",
  "type": "queueTrigger",
  "direction": "in",
  "connection": "storage_APPSETTING",
  "queueName": "helloworldqueue"
},
{
  "name": "myInput",
  "type": "blob",
  "direction": "in",
  "connection": "storage_APPSETTING",
  "path": "helloworld/{queueTrigger}"
},
{
  "name": "myOutput",
  "type": "blob",
  "direction": "out",
  "connection": "storage_APPSETTING",
  "path": "helloworld/{queueTrigger}-copy"
}
```

```javascript
module.exports = async function (context, myQueueItem) {
  const blobValue = context.bindings.myInput;
  context.bindings.myOutput = blobValue;
};
```

The context.done method is deprecated. Before async functions were supported, you would signal that your function is done by calling context.done():
```javascript
module.exports = function (context, request) {
  context.log("this pattern is now deprecated");
  context.done();
};
```

We recommend that you remove the call to context.done() and mark your function as async so that it returns a promise (even if you don't await anything). As soon as your function finishes (in other words, the returned promise resolves), the v3 model knows your function is done.
```javascript
module.exports = async function (context, request) {
  context.log("you don't need context.done or an awaited call");
};
```

Each invocation of your function is passed an invocation context object, with information about your invocation and methods used for logging. In the v4 model, the context object is typically the second argument passed to your handler.

The InvocationContext class has the following properties:
| Property | Description |
|---|---|
| invocationId | The ID of the current function invocation. |
| functionName | The name of the function. |
| extraInputs | Used to get the values of extra inputs. For more information, see extra inputs and outputs. |
| extraOutputs | Used to set the values of extra outputs. For more information, see extra inputs and outputs. |
| retryContext | See retry context. |
| traceContext | The context for distributed tracing. For more information, see Trace Context. |
| triggerMetadata | Metadata about the trigger input for this invocation, not including the value itself. For example, an event hub trigger has an enqueuedTimeUtc property. |
| options | The options used when registering the function, after they've been validated and with defaults explicitly specified. |
The retryContext object has the following properties:
| Property | Description |
|---|---|
| retryCount | A number representing the current retry attempt. |
| maxRetryCount | The maximum number of times an execution is retried. A value of -1 means to retry indefinitely. |
| exception | The exception that caused the retry. |
For more information, see retry policies.
In Azure Functions, it's recommended to use context.log() to write logs. Azure Functions integrates with Azure Application Insights to better capture your function app logs. Application Insights, part of Azure Monitor, provides facilities for collection, visual rendering, and analysis of both application logs and your trace outputs. To learn more, see monitoring Azure Functions.
Note
If you use the alternative Node.js console.log method, those logs are tracked at the app level and will not be associated with any specific function. We highly recommend that you use context for logging instead of console so that all logs are associated with a specific function.
The following example writes a log at the default "information" level, including the invocation ID:
```javascript
context.log(`Something has happened. Invocation ID: "${context.invocationId}"`);
```

In addition to the default context.log method, the following methods are available that let you write logs at specific levels:
| Method | Description |
|---|---|
| context.log.error() | Writes an error-level event to the logs. |
| context.log.warn() | Writes a warning-level event to the logs. |
| context.log.info() | Writes an information-level event to the logs. |
| context.log.verbose() | Writes a trace-level event to the logs. |
| Method | Description |
|---|---|
| context.trace() | Writes a trace-level event to the logs. |
| context.debug() | Writes a debug-level event to the logs. |
| context.info() | Writes an information-level event to the logs. |
| context.warn() | Writes a warning-level event to the logs. |
| context.error() | Writes an error-level event to the logs. |
Azure Functions lets you define the threshold level to be used when tracking and viewing logs. To set the threshold, use the logging.logLevel property in the host.json file. This property lets you define a default level applied to all functions, or a threshold for each individual function. To learn more, see How to configure monitoring for Azure Functions.
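For example, a host.json that sets an app-wide default level and a stricter threshold for one function might look like the following sketch (the function name Http1 is illustrative):

```json
{
  "logging": {
    "logLevel": {
      "default": "Information",
      "Function.Http1": "Debug"
    }
  }
}
```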
By default, Azure Functions writes output as traces to Application Insights. For more control, you can instead use the Application Insights Node.js SDK to send custom data to your Application Insights instance.
```javascript
const appInsights = require("applicationinsights");
appInsights.setup();
const client = appInsights.defaultClient;

module.exports = async function (context, request) {
  // Use this with 'tagOverrides' to correlate custom logs to the parent function invocation.
  var operationIdOverride = {
    "ai.operation.id": context.traceContext.traceparent,
  };

  client.trackEvent({
    name: "my custom event",
    tagOverrides: operationIdOverride,
    properties: { customProperty2: "custom property value" },
  });
  client.trackException({
    exception: new Error("handled exceptions can be logged with this method"),
    tagOverrides: operationIdOverride,
  });
  client.trackMetric({
    name: "custom metric",
    value: 3,
    tagOverrides: operationIdOverride,
  });
  client.trackTrace({
    message: "trace message",
    tagOverrides: operationIdOverride,
  });
  client.trackDependency({
    target: "http://dbname",
    name: "select customers proc",
    data: "SELECT * FROM Customers",
    duration: 231,
    resultCode: 0,
    success: true,
    dependencyTypeName: "ZSQL",
    tagOverrides: operationIdOverride,
  });
  client.trackRequest({
    name: "GET /customers",
    url: "http://myserver/customers",
    duration: 309,
    resultCode: 200,
    success: true,
    tagOverrides: operationIdOverride,
  });
};
```

The tagOverrides parameter sets the operation_Id to the function's invocation ID. This setting enables you to correlate all of the automatically generated and custom logs for a given function invocation.
HTTP and webhook triggers use request and response objects to represent HTTP messages.
HTTP and webhook triggers use HttpRequest and HttpResponse objects to represent HTTP messages. The classes represent a subset of the fetch standard, using Node.js's undici package.
The request can be accessed in several ways:
As the second argument to your function:
```javascript
module.exports = async function (context, request) {
  context.log(`Http function processed request for url "${request.url}"`);
};
```

From the context.req property:
```javascript
module.exports = async function (context, request) {
  context.log(`Http function processed request for url "${context.req.url}"`);
};
```

From the named input bindings: This option works the same as any non-HTTP binding. The binding name in function.json must match the key on context.bindings, or "request1" in the following example:
```json
{
  "name": "request1",
  "type": "httpTrigger",
  "direction": "in",
  "authLevel": "anonymous",
  "methods": ["get", "post"]
}
```

```javascript
module.exports = async function (context, request) {
  context.log(`Http function processed request for url "${context.bindings.request1.url}"`);
};
```

The HttpRequest object has the following properties:
| Property | Type | Description |
|---|---|---|
| method | string | HTTP request method used to invoke this function. |
| url | string | Request URL. |
| headers | Record<string, string> | HTTP request headers. This object is case sensitive. It's recommended to use request.getHeader('header-name') instead, which is case insensitive. |
| query | Record<string, string> | Query string parameter keys and values from the URL. |
| params | Record<string, string> | Route parameter keys and values. |
| user | HttpRequestUser \| null | Object representing the logged-in user, either through Functions authentication or SWA authentication, or null when no such user is logged in. |
| body | Buffer \| string \| any | If the media type is "application/octet-stream" or "multipart/*", body is a Buffer. If the value is a JSON parse-able string, body is the parsed object. Otherwise, body is a string. |
| rawBody | string | The body as a string. Despite the name, this property doesn't return a Buffer. |
| bufferBody | Buffer | The body as a buffer. |
The request can be accessed as the first argument to your handler for an HTTP triggered function.
```javascript
async (request, context) => {
  context.log(`Http function processed request for url "${request.url}"`);
};
```

The HttpRequest object has the following properties:
| Property | Type | Description |
|---|---|---|
| method | string | HTTP request method used to invoke this function. |
| url | string | Request URL. |
| headers | Headers | HTTP request headers. |
| query | URLSearchParams | Query string parameter keys and values from the URL. |
| params | Record<string, string> | Route parameter keys and values. |
| user | HttpRequestUser \| null | Object representing the logged-in user, either through Functions authentication or SWA authentication, or null when no such user is logged in. |
| body | ReadableStream \| null | Body as a readable stream. |
| bodyUsed | boolean | A boolean indicating whether the body has already been read. |
To access a request or response's body, you can use the following methods:
| Method | Return Type |
|---|---|
| arrayBuffer() | Promise<ArrayBuffer> |
| blob() | Promise<Blob> |
| formData() | Promise<FormData> |
| json() | Promise<unknown> |
| text() | Promise<string> |
Note
The body functions can be run only once. Subsequent calls resolve with empty strings/ArrayBuffers.
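Because these classes follow the fetch standard, the one-time-read behavior can be sketched with the standard Request class built into Node.js 18+ (used here as a stand-in for the Functions HttpRequest type, which it is not):

```javascript
// Sketch: the v4 HttpRequest follows the fetch standard, so its body methods
// behave like those on the global Request class (a stand-in, not the
// Functions type itself).
(async () => {
  const request = new Request("http://localhost/api/hello", {
    method: "POST",
    body: JSON.stringify({ name: "world" }),
    duplex: "half", // required by Node.js when a Request is given a body
  });

  console.log(request.bodyUsed); // false: nothing has read the body yet
  const data = await request.json(); // parses the body as JSON
  console.log(data.name); // "world"
  console.log(request.bodyUsed); // true: the body can be read only once
})();
```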
The response can be set in several ways:
Set the context.res property:
```javascript
module.exports = async function (context, request) {
  context.res = { body: `Hello, world!` };
};
```

Return the response: If your function is async and you set the binding name to $return in your function.json, you can return the response directly instead of setting it on context.
```json
{
  "type": "http",
  "direction": "out",
  "name": "$return"
}
```

```javascript
module.exports = async function (context, request) {
  return { body: `Hello, world!` };
};
```

Set the named output binding: This option works the same as any non-HTTP binding. The binding name in function.json must match the key on context.bindings, or "response1" in the following example:
```json
{
  "type": "http",
  "direction": "out",
  "name": "response1"
}
```

```javascript
module.exports = async function (context, request) {
  context.bindings.response1 = { body: `Hello, world!` };
};
```

Call context.res.send(): This option is deprecated. It implicitly calls context.done() and can't be used in an async function.
```javascript
module.exports = function (context, request) {
  context.res.send(`Hello, world!`);
};
```

If you create a new object when setting the response, that object must match the HttpResponseSimple interface, which has the following properties:
| Property | Type | Description |
|---|---|---|
| headers | Record<string, string> (optional) | HTTP response headers. |
| cookies | Cookie[] (optional) | HTTP response cookies. |
| body | any (optional) | HTTP response body. |
| statusCode | number (optional) | HTTP response status code. If not set, defaults to 200. |
| status | number (optional) | The same as statusCode. This property is ignored if statusCode is set. |
You can also modify the context.res object without overwriting it. The default context.res object uses the HttpResponseFull interface, which supports the following methods in addition to the HttpResponseSimple properties:
| Method | Description |
|---|---|
| status() | Sets the status. |
| setHeader() | Sets a header field. NOTE: res.set() and res.header() are also supported and do the same thing. |
| getHeader() | Gets a header field. NOTE: res.get() is also supported and does the same thing. |
| removeHeader() | Removes a header. |
| type() | Sets the "content-type" header. |
| send() | This method is deprecated. It sets the body and calls context.done() to indicate a sync function is finished. NOTE: res.end() is also supported and does the same thing. |
| sendStatus() | This method is deprecated. It sets the status code and calls context.done() to indicate a sync function is finished. |
| json() | This method is deprecated. It sets the "content-type" to "application/json", sets the body, and calls context.done() to indicate a sync function is finished. |
The response can be set in several ways:
As a simple interface with type HttpResponseInit: This option is the most concise way of returning responses.
```javascript
return { body: `Hello, world!` };
```

The HttpResponseInit interface has the following properties:
| Property | Type | Description |
|---|---|---|
| body | BodyInit (optional) | HTTP response body as one of ArrayBuffer, AsyncIterable<Uint8Array>, Blob, FormData, Iterable<Uint8Array>, NodeJS.ArrayBufferView, URLSearchParams, null, or string. |
| jsonBody | any (optional) | A JSON-serializable HTTP response body. If set, the HttpResponseInit.body property is ignored in favor of this property. |
| status | number (optional) | HTTP response status code. If not set, defaults to 200. |
| headers | HeadersInit (optional) | HTTP response headers. |
| cookies | Cookie[] (optional) | HTTP response cookies. |
As a class with type HttpResponse: This option provides helper methods for reading and modifying various parts of the response, like the headers.
```javascript
const response = new HttpResponse({ body: `Hello, world!` });
response.headers.set("content-type", "application/json");
return response;
```

The HttpResponse class accepts an optional HttpResponseInit as an argument to its constructor and has the following properties:
| Property | Type | Description |
|---|---|---|
| status | number | HTTP response status code. |
| headers | Headers | HTTP response headers. |
| cookies | Cookie[] | HTTP response cookies. |
| body | ReadableStream \| null | Body as a readable stream. |
| bodyUsed | boolean | A boolean indicating if the body has been read from already. |
HTTP streams is a feature that makes it easier to process large data, stream OpenAI responses, deliver dynamic content, and support other core HTTP scenarios. It lets you stream requests to and responses from HTTP endpoints in your Node.js function app. Use HTTP streams in scenarios where your app requires real-time exchange and interaction between client and server over HTTP. You can also use HTTP streams to get the best performance and reliability for your apps when using HTTP.
Important
HTTP streams aren't supported in the v3 model. Upgrade to the v4 model to use the HTTP streaming feature. The existing HttpRequest and HttpResponse types in programming model v4 already support various ways of handling the message body, including as a stream.
HTTP streams require version 4.3.0 or later of the @azure/functions npm package. Use these steps to enable HTTP streams in your function app in Azure and in your local projects:
If you plan to stream large amounts of data, modify the FUNCTIONS_REQUEST_BODY_SIZE_LIMIT setting in Azure. The default maximum body size allowed is 104857600, which limits your requests to a size of ~100 MB.
For local development, also add FUNCTIONS_REQUEST_BODY_SIZE_LIMIT to the local.settings.json file.
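For example, a minimal local.settings.json with the limit raised might look like this (the 262144000 value, ~250 MB, is illustrative):

```json
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "FUNCTIONS_REQUEST_BODY_SIZE_LIMIT": "262144000"
  }
}
```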
Add the following code to your app in any file included by your main field.
```javascript
const { app } = require("@azure/functions");

app.setup({ enableHttpStream: true });
```

This example shows an HTTP triggered function that receives data via an HTTP POST request, and the function streams this data to a specified output file:
```javascript
const { app } = require('@azure/functions');
const { createWriteStream } = require('fs');
const { Writable } = require('stream');

app.http('httpTriggerStreamRequest', {
    methods: ['POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        const writeStream = createWriteStream('<output file path>');
        await request.body.pipeTo(Writable.toWeb(writeStream));
        return { body: 'Done!' };
    },
});
```

This example shows an HTTP triggered function that streams a file's content as the response to incoming HTTP GET requests:
```javascript
const { app } = require('@azure/functions');
const { createReadStream } = require('fs');

app.http('httpTriggerStreamResponse', {
    methods: ['GET'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        const body = createReadStream('<input file path>');
        return { body };
    },
});
```

For a ready-to-run sample app using streams, check out this example on GitHub.
Use request.body to obtain the maximum benefit from using streams. You can still continue to use methods like request.text(), which always return the body as a string.

Hooks aren't supported in the v3 model. Upgrade to the v4 model to use hooks.
Use a hook to execute code at different points in the Azure Functions lifecycle. Hooks are executed in the order they're registered and can be registered from any file in your app. There are currently two scopes of hooks, "app" level and "invocation" level.
Invocation hooks are executed once per invocation of your function, either before in a preInvocation hook or after in a postInvocation hook. By default, your hook executes for all trigger types, but you can also filter by type. The following example shows how to register an invocation hook and filter by trigger type:
```javascript
const { app } = require('@azure/functions');

app.hook.preInvocation((context) => {
    if (context.invocationContext.options.trigger.type === 'httpTrigger') {
        context.invocationContext.log(
            `preInvocation hook executed for http function ${context.invocationContext.functionName}`
        );
    }
});

app.hook.postInvocation((context) => {
    if (context.invocationContext.options.trigger.type === 'httpTrigger') {
        context.invocationContext.log(
            `postInvocation hook executed for http function ${context.invocationContext.functionName}`
        );
    }
});
```

The first argument to the hook handler is a context object specific to that hook type.
The PreInvocationContext object has the following properties:
| Property | Description |
|---|---|
| inputs | The arguments passed to the invocation. |
| functionHandler | The function handler for the invocation. Changes to this value affect the function itself. |
| invocationContext | The invocation context object passed to the function. |
| hookData | The recommended place to store and share data between hooks in the same scope. You should use a unique property name so that it doesn't conflict with other hooks' data. |
The PostInvocationContext object has the following properties:
| Property | Description |
|---|---|
| inputs | The arguments passed to the invocation. |
| result | The result of the function. Changes to this value affect the overall result of the function. |
| error | The error thrown by the function, or null/undefined if there's no error. Changes to this value affect the overall result of the function. |
| invocationContext | The invocation context object passed to the function. |
| hookData | The recommended place to store and share data between hooks in the same scope. You should use a unique property name so that it doesn't conflict with other hooks' data. |
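As an example of sharing data through hookData, the handlers below measure invocation duration by storing a start time in a preInvocation hook and reading it back in a postInvocation hook. This is a sketch; the myTimerStart property name is arbitrary (pick something unlikely to collide with other hooks' data):

```javascript
// Sketch: share data between pre- and post-invocation hooks via hookData.
const preHandler = (context) => {
    // Store the start time under a unique property name.
    context.hookData.myTimerStart = Date.now();
};

const postHandler = (context) => {
    // Read the value written by the pre-invocation hook for the same invocation.
    const elapsedMs = Date.now() - context.hookData.myTimerStart;
    context.invocationContext.log(`Invocation took ${elapsedMs} ms`);
};

// Registration with the v4 programming model:
// const { app } = require('@azure/functions');
// app.hook.preInvocation(preHandler);
// app.hook.postInvocation(postHandler);
```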
App hooks are executed once per instance of your app, either during startup in an appStart hook or during termination in an appTerminate hook. App terminate hooks have a limited time to execute and don't execute in all scenarios.
The Azure Functions runtime currently doesn't support context logging outside of an invocation. Use the Application Insights npm package to log data during app-level hooks.
The following example registers app hooks:
```javascript
const { app } = require('@azure/functions');

app.hook.appStart((context) => {
    // add your logic here
});

app.hook.appTerminate((context) => {
    // add your logic here
});
```

The first argument to the hook handler is a context object specific to that hook type.
The AppStartContext object has the following properties:
| Property | Description |
|---|---|
| hookData | The recommended place to store and share data between hooks in the same scope. You should use a unique property name so that it doesn't conflict with other hooks' data. |
The AppTerminateContext object has the following properties:
| Property | Description |
|---|---|
| hookData | The recommended place to store and share data between hooks in the same scope. You should use a unique property name so that it doesn't conflict with other hooks' data. |
By default, Azure Functions automatically monitors the load on your application and creates more host instances for Node.js as needed. Azure Functions uses built-in (not user configurable) thresholds for different trigger types to decide when to add instances, such as the age of messages and queue size for QueueTrigger. For more information, see How the Consumption and Premium plans work.
This scaling behavior is sufficient for many Node.js applications. For CPU-bound applications, you can improve performance further by using multiple language worker processes. You can increase the number of worker processes per host from the default of 1 up to a maximum of 10 by using the FUNCTIONS_WORKER_PROCESS_COUNT application setting. Azure Functions then tries to evenly distribute simultaneous function invocations across these workers. This behavior makes it less likely that a CPU-intensive function blocks other functions from running. The setting applies to each host that Azure Functions creates when scaling out your application to meet demand.
Warning
Use the FUNCTIONS_WORKER_PROCESS_COUNT setting with caution. Multiple processes running in the same instance can lead to unpredictable behavior and increase function load times. If you use this setting, we highly recommend that you offset these downsides by running from a package file.
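If you do decide to increase the worker count, FUNCTIONS_WORKER_PROCESS_COUNT is applied like any other application setting; a sketch using the Azure CLI (the value 4 and the placeholder names are illustrative):

```shell
az functionapp config appsettings set --settings FUNCTIONS_WORKER_PROCESS_COUNT=4 \
  --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
```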
You can see the current version that the runtime is using by logging process.version from any function. See supported versions for a list of Node.js versions supported by each programming model.
The way that you upgrade your Node.js version depends on the OS on which your function app runs.
When it runs on Windows, the Node.js version is set by the WEBSITE_NODE_DEFAULT_VERSION application setting. This setting can be updated either by using the Azure CLI or in the Azure portal.
For more information about Node.js versions, see Supported versions.
Before upgrading your Node.js version, make sure your function app is running on the latest version of the Azure Functions runtime. If you need to upgrade your runtime version, see Migrate apps from Azure Functions version 3.x to version 4.x.
Run the Azure CLI az functionapp config appsettings set command to update the Node.js version for your function app running on Windows:
```shell
az functionapp config appsettings set --settings WEBSITE_NODE_DEFAULT_VERSION=~22 \
  --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
```

This sets the WEBSITE_NODE_DEFAULT_VERSION application setting to the supported LTS version of ~22.
After changes are made, your function app restarts. To learn more about Functions support for Node.js, see Language runtime support policy.
Environment variables can be useful for operational secrets (connection strings, keys, endpoints, etc.) or environmental settings such as profiling variables. You can add environment variables in both your local and cloud environments and access them throughprocess.env in your function code.
The following example logs the WEBSITE_SITE_NAME environment variable:
```javascript
// v3 model
module.exports = async function (context) {
    context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`);
};

// v4 model
async function timerTrigger1(myTimer, context) {
    context.log(`WEBSITE_SITE_NAME: ${process.env["WEBSITE_SITE_NAME"]}`);
}
```

When you run locally, your functions project includes a local.settings.json file, where you store your environment variables in the Values object.
```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "CUSTOM_ENV_VAR_1": "hello",
    "CUSTOM_ENV_VAR_2": "world"
  }
}
```

When you run in Azure, the function app lets you set and use Application settings, such as service connection strings, and exposes these settings as environment variables during execution.
There are several ways that you can add, update, and delete function app settings:
Changes to function app settings require your function app to be restarted.
There are several Functions environment variables specific to Node.js:
This setting allows you to specify custom arguments when starting your Node.js process. It's most often used locally to start the worker in debug mode, but can also be used in Azure if you need custom arguments.
Warning
If possible, avoid using languageWorkers__node__arguments in Azure because it can have a negative effect on cold start times. Rather than using prewarmed workers, the runtime has to start a new worker from scratch with your custom arguments.
This setting adjusts the default log level for Node.js-specific worker logs. By default, only warning or error logs are shown, but you can set it to information or debug to help diagnose issues with the Node.js worker. For more information, see configuring log levels.
Note
ECMAScript modules are currently a preview feature in Node.js 14 or higher in Azure Functions.
ECMAScript modules (ES modules) are the new official standard module system for Node.js. So far, the code samples in this article use the CommonJS syntax. When running Azure Functions in Node.js 14 or higher, you can choose to write your functions using ES modules syntax.
To use ES modules in a function, change its filename to use a .mjs extension. The following index.mjs file example is an HTTP triggered function that uses ES modules syntax to import the uuid library and return a value.
```javascript
// v3 model
import { v4 as uuidv4 } from "uuid";

async function httpTrigger1(context, request) {
    context.res.body = uuidv4();
}

export default httpTrigger1;
```

```javascript
// v4 model
import { app } from "@azure/functions";
import { v4 as uuidv4 } from "uuid";

async function httpTrigger1(request, context) {
    return { body: uuidv4() };
}

app.http("httpTrigger1", {
    methods: ["GET", "POST"],
    handler: httpTrigger1,
});
```

The function.json properties scriptFile and entryPoint can be used to configure the location and name of your exported function. The scriptFile property is required when you're using TypeScript and should point to the compiled JavaScript.
scriptFile

By default, a JavaScript function is executed from index.js, a file that shares the same parent directory as its corresponding function.json.
scriptFile can be used to get a folder structure that looks like the following example:
```
<project_root>/
 | - node_modules/
 | - myFirstFunction/
 | | - function.json
 | - lib/
 | | - sayHello.js
 | - host.json
 | - package.json
```

The function.json for myFirstFunction should include a scriptFile property pointing to the file with the exported function to run.
```json
{
  "scriptFile": "../lib/sayHello.js",
  "bindings": [
    ...
  ]
}
```

entryPoint

In the v3 model, a function must be exported using module.exports in order to be found and run. By default, the function that executes when triggered is the only export from that file, the export named run, or the export named index. The following example sets entryPoint in function.json to a custom value, "logHello":
```json
{
  "entryPoint": "logHello",
  "bindings": [
    ...
  ]
}
```

```javascript
async function logHello(context) {
    context.log("Hello, world!");
}

module.exports = { logHello };
```

We recommend that you use VS Code for local debugging, which starts your Node.js process in debug mode automatically and attaches to the process for you. For more information, see run the function locally.
If you're using a different tool for debugging or want to start your Node.js process in debug mode manually, add "languageWorkers__node__arguments": "--inspect" under Values in your local.settings.json. The --inspect argument tells Node.js to listen for a debug client, on port 9229 by default. For more information, see the Node.js debugging guide.
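A minimal local.settings.json with the debug argument might look like this (the other values shown are the usual local defaults):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "languageWorkers__node__arguments": "--inspect"
  }
}
```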
This section describes several impactful patterns for Node.js apps that we recommend you follow.
When you create a function app that uses the App Service plan, we recommend that you select a single-vCPU plan rather than a plan with multiple vCPUs. Today, Functions runs Node.js functions more efficiently on single-vCPU VMs, and using larger VMs doesn't produce the expected performance improvements. When necessary, you can manually scale out by adding more single-vCPU VM instances, or you can enable autoscale. For more information, seeScale instance count manually or automatically.
When you develop Azure Functions in the serverless hosting model, cold starts are a reality. Cold start refers to the first time your function app starts after a period of inactivity, taking longer to start up. For Node.js apps with large dependency trees in particular, cold start can be significant. To speed up the cold start process, run your functions as a package file when possible. Many deployment methods use this model by default, but if you're experiencing large cold starts you should check to make sure you're running this way.
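Running from a package file is typically controlled by the WEBSITE_RUN_FROM_PACKAGE application setting; a sketch using the Azure CLI (placeholder names are illustrative):

```shell
az functionapp config appsettings set --settings WEBSITE_RUN_FROM_PACKAGE=1 \
  --name <FUNCTION_APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
```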
When you use a service-specific client in an Azure Functions application, don't create a new client with every function invocation because you can hit connection limits. Instead, create a single, static client in the global scope. For more information, see managing connections in Azure Functions.
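The pattern can be sketched as follows; ServiceClient here is a stand-in for any real SDK client (for example, a database or HTTP client) and is not part of @azure/functions:

```javascript
// Sketch: create the client once at module scope so every invocation in this
// worker process reuses it, instead of opening new connections per call.
class ServiceClient {
    constructor() {
        // Counter used only to illustrate how many clients get created.
        ServiceClient.instances = (ServiceClient.instances || 0) + 1;
    }
    getItem(id) {
        return { id };
    }
}

// Created once when the module loads, then reused across invocations.
const client = new ServiceClient();

async function handler(request, context) {
    // DON'T create the client here: `new ServiceClient()` per invocation
    // can exhaust connection limits under load.
    return { jsonBody: client.getItem("42") };
}
```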
async and await

When writing Azure Functions in Node.js, you should write code using the async and await keywords. Writing code using async and await instead of callbacks or .then and .catch with Promises helps avoid two common problems:
- Throwing uncaught exceptions that crash the Node.js process, potentially affecting the execution of other functions.
- Unexpected behavior, such as missing logs from context.log, caused by asynchronous calls that aren't properly awaited.

In the following example, the asynchronous method fs.readFile is invoked with an error-first callback function as its second parameter. This code causes both of the issues previously mentioned. An exception that isn't explicitly caught in the correct scope can crash the entire process (issue #1). Returning without ensuring the callback finishes means the HTTP response sometimes has an empty body (issue #2).
```javascript
// DO NOT USE THIS CODE
const { app } = require('@azure/functions');
const fs = require('fs');

app.http('httpTriggerBadAsync', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        let fileData;
        fs.readFile('./helloWorld.txt', (err, data) => {
            if (err) {
                context.error(err);
                // BUG #1: This will result in an uncaught exception that crashes the entire process
                throw err;
            }
            fileData = data;
        });
        // BUG #2: fileData is not guaranteed to be set before the invocation ends
        return { body: fileData };
    },
});
```

In the following example, the asynchronous method fs.readFile is invoked with an error-first callback function as its second parameter. This code causes both of the issues previously mentioned. An exception that isn't explicitly caught in the correct scope can crash the entire process (issue #1). Calling the deprecated context.done() method outside of the scope of the callback can signal the function is finished before the file is read (issue #2). In this example, calling context.done() too early results in missing log entries starting with Data from file:.
```javascript
// NOT RECOMMENDED PATTERN
const fs = require("fs");

module.exports = function (context) {
    fs.readFile("./hello.txt", (err, data) => {
        if (err) {
            context.log.error("ERROR", err);
            // BUG #1: This will result in an uncaught exception that crashes the entire process
            throw err;
        }
        context.log(`Data from file: ${data}`);
        // context.done() should be called here
    });
    // BUG #2: Data is not guaranteed to be read before the Azure Function's invocation ends
    context.done();
};
```

Use the async and await keywords to help avoid both of these issues. Most APIs in the Node.js ecosystem have been converted to support promises in some form. For example, starting in v14, Node.js provides an fs/promises API to replace the fs callback API.
In the following example, any unhandled exceptions thrown during the function execution only fail the individual invocation that raised the exception. The await keyword means that steps following readFile only execute after it's complete.
```javascript
// Recommended pattern
const { app } = require('@azure/functions');
const fs = require('fs/promises');

app.http('httpTriggerGoodAsync', {
    methods: ['GET', 'POST'],
    authLevel: 'anonymous',
    handler: async (request, context) => {
        try {
            const fileData = await fs.readFile('./helloWorld.txt');
            return { body: fileData };
        } catch (err) {
            context.error(err);
            // This rethrown exception will only fail the individual invocation, instead of crashing the whole process
            throw err;
        }
    },
});
```

With async and await, you also don't need to call the context.done() callback.
```javascript
// Recommended pattern
const fs = require("fs/promises");

module.exports = async function (context) {
    let data;
    try {
        data = await fs.readFile("./hello.txt");
    } catch (err) {
        context.log.error("ERROR", err);
        // This rethrown exception will be handled by the Functions Runtime and will only fail the individual invocation
        throw err;
    }
    context.log(`Data from file: ${data}`);
};
```

See the Node.js Troubleshoot guide.
For more information, see the following resources: