Manage functions

You can deploy, delete, and modify functions using Firebase CLI commands or by setting runtime options in your functions source code.

Deploy functions

To deploy functions, run this Firebase CLI command:

firebase deploy --only functions

By default, the Firebase CLI deploys all of the functions inside your source at the same time. If your project contains more than 5 functions, we recommend that you use the --only flag with specific function names to deploy only the functions that you've edited. Deploying specific functions this way speeds up the deployment process and helps you avoid running into deployment quotas. For example:

firebase deploy --only functions:addMessage,functions:makeUppercase

When deploying large numbers of functions, you may exceed the standard quota and receive HTTP 429 or 500 error messages. To solve this, deploy functions in groups of 10 or fewer.
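For example, a project with many functions might be deployed in two smaller batches like this (the function names below are hypothetical placeholders, not part of the example above):

# Sketch only: deploy a large set of functions in batches of 10 or fewer
firebase deploy --only functions:syncUsers,functions:sendInvites,functions:cleanupSessions
firebase deploy --only functions:resizeImages,functions:indexSearch,functions:auditLogs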

See the Firebase CLI reference for the full list of available commands.

By default, the Firebase CLI looks in the functions/ folder for the source code. If you prefer, you can organize functions in codebases or multiple sets of files.
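For example, a minimal firebase.json sketch that splits functions across two codebases might look like this (the second source directory name is hypothetical):

{
  "functions": [
    { "source": "functions", "codebase": "default" },
    { "source": "team-a-functions", "codebase": "team-a" }
  ]
}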

Clean up deployment artifacts

As part of functions deployment, container images are generated and stored in Artifact Registry. These images are not required for your deployed functions to run; Cloud Functions fetches and retains a copy of the image on the initial deployment, but the stored artifacts are not necessary for the function to work at runtime.

While these container images are often small, they can accumulate over time and contribute to your storage costs. You might prefer to retain them for a period of time if you're planning to inspect the built artifacts or run container vulnerability scans.

To help manage storage costs, Firebase CLI 14.0.0 and higher lets you configure an Artifact Registry cleanup policy for the repositories that store deployment artifacts after each function deployment.

You can manually set up or edit a cleanup policy using the functions:artifacts:setpolicy command:

firebase functions:artifacts:setpolicy

By default, this command configures Artifact Registry to automatically delete container images older than 1 day. This provides a reasonable balance between minimizing storage costs and allowing for potential inspection of recent builds.

You can customize the retention period using the --days option:

firebase functions:artifacts:setpolicy --days 7  # Delete images older than 7 days

If you deploy functions to multiple regions, you can set up a cleanup policy for a specific location using the --location option:

firebase functions:artifacts:setpolicy --location europe-west1

Opt out of artifact cleanup

If you prefer to manage image cleanup manually, or if you do not want any images deleted, you can opt out of cleanup policies entirely:

firebase functions:artifacts:setpolicy --none

This command removes any existing cleanup policy that the Firebase CLI has set up and prevents Firebase from setting up a cleanup policy after function deployments.

Delete functions

You can delete previously deployed functions in these ways:

  • explicitly in the Firebase CLI with functions:delete
  • explicitly in the Google Cloud console
  • implicitly by removing the function from source prior to deployment

All deletion operations prompt you to confirm before removing the function from production.

Explicit function deletion in the Firebase CLI supports multiple arguments as well as functions groups, and allows you to specify a function running in a particular region. Also, you can override the confirmation prompt.

  • Deletes all functions that match the specified name in all regions:

    firebase functions:delete FUNCTION-1_NAME

  • Deletes a specified function running in a non-default region:

    firebase functions:delete FUNCTION-1_NAME --region REGION_NAME

  • Deletes more than one function:

    firebase functions:delete FUNCTION-1_NAME FUNCTION-2_NAME

  • Deletes a specified functions group:

    firebase functions:delete GROUP_NAME

  • Bypasses the confirmation prompt:

    firebase functions:delete FUNCTION-1_NAME --force

With implicit function deletion, firebase deploy parses your source and removes from production any functions that have been removed from the file.

Modify a function's name, region or trigger

If you are renaming or changing the regions or trigger for functions that are handling production traffic, follow the steps in this section to avoid losing events during modification. Before you follow these steps, first ensure that your function is idempotent, since both the new version and the old version of your function will be running at the same time during the change.

Rename a function

To rename a function, create a new renamed version of the function in your source and then run two separate deployment commands. The first command deploys the newly named function, and the second command removes the previously deployed version. For example, if you have an HTTP-triggered webhook you'd like to rename, revise the code as follows:

Node.js

// before
const {onRequest} = require('firebase-functions/v2/https');

exports.webhook = onRequest((req, res) => {
  res.send("Hello");
});

// after
const {onRequest} = require('firebase-functions/v2/https');

exports.webhookNew = onRequest((req, res) => {
  res.send("Hello");
});

Python

# before
from firebase_functions import https_fn

@https_fn.on_request()
def webhook(req: https_fn.Request) -> https_fn.Response:
    return https_fn.Response("Hello world!")

# after
from firebase_functions import https_fn

@https_fn.on_request()
def webhook_new(req: https_fn.Request) -> https_fn.Response:
    return https_fn.Response("Hello world!")

Then run the following commands to deploy the new function:

# Deploy new function
firebase deploy --only functions:webhookNew

# Wait until deployment is done; now both functions are running

# Delete webhook
firebase functions:delete webhook

Change a function's region or regions

If you are changing the specified regions for a function that's handling production traffic, you can prevent event loss by performing these steps in order:

  1. Rename the function, and change its region or regions as desired.
  2. Deploy the renamed function, which results in temporarily running the same code in both sets of regions.
  3. Delete the previous function.

For example, if you have a Cloud Firestore-triggered function that is currently in the default functions region of us-central1, and you want to migrate it to asia-northeast1, you need to first modify your source code to rename the function and revise the region.

Node.js

// before
exports.firestoreTrigger = onDocumentCreated(
  "my-collection/{docId}",
  (event) => {},
);

// after
exports.firestoreTriggerAsia = onDocumentCreated(
  {
    document: "my-collection/{docId}",
    region: "asia-northeast1",
  },
  (event) => {},
);

The updated code should specify the correct event filter (in this case document) along with the region. See Cloud Functions locations for more information.

Python

# Before
@firestore_fn.on_document_created("my-collection/{docId}")
def firestore_trigger(event):
    pass

# After
@firestore_fn.on_document_created("my-collection/{docId}",
                                  region="asia-northeast1")
def firestore_trigger_asia(event):
    pass

Then deploy by running:

firebase deploy --only functions:firestoreTriggerAsia

Now there are two identical functions running: firestoreTrigger is running in us-central1, and firestoreTriggerAsia is running in asia-northeast1.

Then, delete firestoreTrigger:

firebase functions:delete firestoreTrigger

Now there is only one function, firestoreTriggerAsia, which is running in asia-northeast1.

Change a function's trigger type

As you develop your Cloud Functions for Firebase deployment over time, you may need to change a function's trigger type for various reasons. For example, you might want to change from one type of Firebase Realtime Database or Cloud Firestore event to another type.

It is not possible to change a function's event type by just changing the source code and running firebase deploy. To avoid errors, change a function's trigger type using this procedure:

  1. Modify the source code to include a new function with the desired trigger type.
  2. Deploy the function, which results in temporarily running both the old and new functions.
  3. Explicitly delete the old function from production using the Firebase CLI.

For instance, if you had a function that was triggered when an object was deleted, but then you enabled object versioning and would like to subscribe to the archive event instead, first rename the function and edit it to have the new trigger type.

Node.js

// before
const {onObjectDeleted} = require("firebase-functions/v2/storage");

exports.objectDeleted = onObjectDeleted((event) => {
  // ...
});

// after
const {onObjectArchived} = require("firebase-functions/v2/storage");

exports.objectArchived = onObjectArchived((event) => {
  // ...
});

Python

# before
from firebase_functions import storage_fn

@storage_fn.on_object_deleted()
def object_deleted(event):
    # ...
    pass

# after
from firebase_functions import storage_fn

@storage_fn.on_object_archived()
def object_archived(event):
    # ...
    pass

Then run the following commands to create the new function first, before deleting the old function:

# Create new function objectArchived
firebase deploy --only functions:objectArchived

# Wait until deployment is done; now both objectDeleted and objectArchived are running

# Delete objectDeleted
firebase functions:delete objectDeleted

Set runtime options

Cloud Functions for Firebase lets you select runtime options such as the Node.js runtime version and per-function timeout, memory allocation, and minimum/maximum function instances.

As a best practice, these options (except for Node.js version) should be set on a configuration object inside the function code. This RuntimeOptions object is the source of truth for your function's runtime options, and will override options set via any other method (such as via the Google Cloud console or the gcloud CLI).

If your development workflow involves manually setting runtime options via the Google Cloud console or the gcloud CLI and you don't want these values to be overridden on each deploy, set the preserveExternalChanges option to true. With this option set to true, Firebase merges the runtime options set in your code with the settings of the currently deployed version of your function with the following priority:

  1. Option is set in functions code: override external changes.
  • Option is set to RESET_VALUE in functions code: override external changes with the default value.
  3. Option is not set in functions code, but is set in currently deployed function: use the option specified in the deployed function.

Using the preserveExternalChanges: true option is not recommended for most scenarios because your code will no longer be the full source of truth for runtime options for your functions. If you do use it, check the Google Cloud console or use the gcloud CLI to view a function's full configuration.
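If you do opt in, a minimal sketch of setting the option on a single, hypothetical function might look like this:

const {onRequest} = require("firebase-functions/v2/https");

exports.legacyTunedFunction = onRequest(
  {
    // Keep runtime options that were changed manually in the
    // Google Cloud console or with the gcloud CLI
    preserveExternalChanges: true,
    // Options set here still take priority over external changes
    memory: "512MiB",
  },
  (req, res) => {
    res.send("Hello from Firebase!");
  },
);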

Set Node.js version

The Firebase SDK for Cloud Functions lets you select a Node.js runtime. You can choose to run all functions in a project exclusively on the runtime environment corresponding to one of these supported Node.js versions:

  • Node.js 22
  • Node.js 20
  • Node.js 18 (deprecated)

Node.js versions 14 and 16 were decommissioned in early 2025. Deployment with these versions is disabled. See the support schedule for important information regarding ongoing support for Node.js versions.

To set the Node.js version:

You can set the version in the engines field in the package.json file that was created in your functions/ directory during initialization. For example, to use only version 20, edit this line in package.json:

  "engines": {"node": "20"}

If you are using the Yarn package manager or have other specific requirements for the engines field, you can set the runtime for the Firebase SDK for Cloud Functions in firebase.json instead:

  {
    "functions": {
      "runtime": "nodejs20"  // or nodejs22
    }
  }

The CLI uses the value set in firebase.json in preference to any value or range that you set separately in package.json.

Upgrade your Node.js runtime

To upgrade your Node.js runtime:

  1. Make sure your project is on the Blaze pricing plan.
  2. Make sure you are using Firebase CLI v11.18.0 or later.
  3. Change the engines value in the package.json file that was created in your functions/ directory during initialization. For example, if you are upgrading from version 18 to version 20, the entry should look like this: "engines": {"node": "20"}
  4. Optionally, test your changes using the Firebase Local Emulator Suite (see the example commands after this list).
  5. Redeploy all functions.
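For reference, a sketch of steps 4 and 5 from the command line, assuming the default functions/ directory, might look like this:

# Test the upgraded runtime locally with the Local Emulator Suite
firebase emulators:start --only functions

# Then redeploy all functions
firebase deploy --only functions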

Choose a Node.js module system

The default module system in Node.js is CommonJS (CJS), but current Node.js versions also support ECMAScript Modules (ESM). Cloud Functions supports both.

By default, your functions use CommonJS. That means imports and exports look like this:

const {onRequest} = require("firebase-functions/https");

exports.helloWorld = onRequest(async (req, res) =>
  res.send("Hello from Firebase!")
);

To use ESM instead, set the "type": "module" field in your package.json file:

{
  ...
  "type": "module",
  ...
}

Once you set this, use ESM import and export syntax:

import {onRequest} from "firebase-functions/https";

export const helloWorld = onRequest(async (req, res) =>
  res.send("Hello from Firebase!")
);

Both module systems are fully supported. You can choose whichever best fits your project. Learn more in the Node.js documentation on modules.

Set Python version

Firebase SDK for Cloud Functions versions 12.0.0 and higher allow selection of the Python runtime. Set the runtime version in firebase.json as shown:

  {
    "functions": {
      "runtime": "python310"  // or python311
    }
  }

Control scaling behavior

By default, Cloud Functions for Firebase scales the number of running instances based on the number of incoming requests, potentially scaling down to zero instances in times of reduced traffic. However, if your app requires reduced latency and you want to limit the number of cold starts, you can change this default behavior by specifying a minimum number of container instances to be kept warm and ready to serve requests.

Similarly, you can set a maximum number to limit the scaling of instances in response to incoming requests. Use this setting as a way to control your costs or to limit the number of connections to a backing service such as a database.

Using these settings together with the per-instance concurrency setting (new in 2nd gen), you can control and tune the scaling behavior for your functions. The nature of your application and function will determine which settings are most cost effective and will result in the best performance.

For some apps with low traffic, a lower CPU option without multi-concurrency is optimal. For others where cold starts are a critical issue, setting high concurrency and minimum instances means that a set of instances are always kept warm to handle large spikes in traffic.

For smaller-scale apps that receive very little traffic, setting low maximum instances with high concurrency means that the app can handle bursts of traffic without incurring excessive costs. However, keep in mind that when maximum instances is set too low, requests may be dropped when the ceiling is reached.

Allow concurrent requests

In Cloud Functions for Firebase (1st gen), each instance could handle one request at a time, so scaling behavior was set only with minimum and maximum instances settings. In addition to controlling the number of instances, in Cloud Functions for Firebase (2nd gen) you can control the number of requests each instance can serve at the same time with the concurrency option. The default value for concurrency is 80, but you can set it to any integer between 1 and 1000.
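As a sketch, setting the concurrency option on a hypothetical HTTP function in code might look like this:

const {onRequest} = require("firebase-functions/v2/https");

exports.api = onRequest(
  {
    // Allow each instance to serve up to 500 requests at the same time
    concurrency: 500,
  },
  (req, res) => {
    res.send("OK");
  },
);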

Functions with higher concurrency settings can absorb spikes of traffic without cold starting because each instance likely has some headroom. If an instance is configured to handle up to 50 concurrent requests but is currently handling only 25, it can handle a spike of 25 additional requests without requiring a new instance to cold start. By contrast, with a concurrency setting of just 1, that spike in requests could lead to 25 cold starts.

This simplified scenario demonstrates the potential efficiency gains of concurrency. In reality, scaling behavior to optimize efficiency and reduce cold starts with concurrency is more complex. Concurrency in Cloud Functions for Firebase 2nd gen is powered by Cloud Run, and follows Cloud Run's rules of container instance autoscaling.

Important: Concurrency is available only to functions with at least 1 full CPU. Setting a function's CPU to a fractional value or setting CPU to gcf_gen1 for a function with less than 2GB RAM will disable concurrency. See Override CPU defaults.

When experimenting with higher concurrency settings in Cloud Functions for Firebase (2nd gen), keep the following in mind:

  • Higher concurrency settings may require higher CPU and RAM for optimal performance until reaching a practical limit. A function that does heavy image or video processing, for example, might lack the resources to handle 1000 concurrent requests, even when its CPU and RAM settings are maximized.
  • Since Cloud Functions for Firebase (2nd gen) is powered by Cloud Run, you can also refer to Google Cloud guidance for optimizing concurrency.
  • Make sure to test multi-concurrency thoroughly in a test environment before switching to multi-concurrency in production.

Keep a minimum number of instances warm

You can set a minimum number of instances for a function in source code. For example, this function sets a minimum of 5 instances to keep warm:

Node.js

const {onCall} = require("firebase-functions/v2/https");

exports.getAutocompleteResponse = onCall(
  {
    // Keep 5 instances warm for this latency-critical function
    minInstances: 5,
  },
  (event) => {
    // Autocomplete user's search term
  },
);

Python

from firebase_functions import https_fn

@https_fn.on_call(min_instances=5)
def get_autocomplete_response(event: https_fn.CallableRequest) -> https_fn.Response:
    # Autocomplete user's search term
    pass

Note: A minimum number of instances kept running incurs billing costs at idle rates. A function with the 1st gen default 256MiB memory allocation costs about $3/mo (with gcf_gen1 CPU allocation and concurrency disabled) or about $8/mo with 1 CPU and concurrency enabled. The Firebase CLI provides a cost estimate at deployment time for functions with reserved minimum instances. Refer to Cloud Run pricing to calculate costs.

Here are some things to consider when setting a minimum instances value:

  • If Cloud Functions for Firebase scales your app above your setting, you'll experience a cold start for each instance above that threshold.
  • Cold starts have the most severe effect on apps with spiky traffic. If your app has spiky traffic and you set a value high enough that cold starts are reduced on each traffic increase, you'll see significantly reduced latency. For apps with constant traffic, cold starts are not likely to severely affect performance.
  • Setting minimum instances can make sense for production environments, but should usually be avoided in testing environments. To scale to zero in your test project but still reduce cold starts in your production project, you can set a minimum instances value in your parameterized configuration:

    Node.js

    const {onRequest} = require('firebase-functions/https');
    const {defineInt, defineString} = require('firebase-functions/params');

    // Define some parameters
    const minInstancesConfig = defineInt('HELLO_WORLD_MININSTANCES');
    const welcomeMessage = defineString('WELCOME_MESSAGE');

    // To use configured parameters inside the config for a function, provide them
    // directly. To use them at runtime, call .value() on them.
    exports.helloWorld = onRequest(
      {minInstances: minInstancesConfig},
      (req, res) => {
        res.send(`${welcomeMessage.value()}! I am a function.`);
      },
    );

    Python

    from firebase_functions import https_fn, params

    MIN_INSTANCES = params.IntParam("HELLO_WORLD_MININSTANCES")
    WELCOME_MESSAGE = params.StringParam("WELCOME_MESSAGE")

    @https_fn.on_request(min_instances=MIN_INSTANCES.value())
    def get_autocomplete_response(event: https_fn.Request) -> https_fn.Response:
        return https_fn.Response(f"{WELCOME_MESSAGE.value()} I'm a function.")

Limit the maximum number of instances for a function

You can set a value for maximum instances in function source code. For example, this function sets a limit of 100 instances so as not to overwhelm a hypothetical legacy database:

Node.js

const {onMessagePublished} = require("firebase-functions/v2/pubsub");

exports.mirrorevents = onMessagePublished(
  {topic: "topic-name", maxInstances: 100},
  (event) => {
    // Connect to legacy database
  },
);

Python

from firebase_functions import pubsub_fn

@pubsub_fn.on_message_published(topic="topic-name", max_instances=100)
def mirrorevents(event: pubsub_fn.CloudEvent):
    # Connect to legacy database
    pass

Note: For more control over invocation rates and throttling, consider task queue functions.

If an HTTP function is scaled up to the maximum instances limit, new requests are queued for 30 seconds and then rejected with a response code of 429 Too Many Requests if no instance is available by then.

To learn more about best practices for using maximum instances settings, check out these best practices for setting maximum instances.

Set a service account

The default service accounts for functions have a broad set of permissions to allow you to interact with other Firebase and Google Cloud services:

  • 2nd gen functions: PROJECT_NUMBER-compute@developer.gserviceaccount.com (named Compute Engine default service account)
  • 1st gen functions: PROJECT_ID@appspot.gserviceaccount.com (named App Engine default service account)

You might want to override the default service account and limit a function to the exact resources needed. You can do this by creating a custom service account and assigning it to the appropriate function using the serviceAccount configuration value:

const{onRequest}=require("firebase-functions/https");exports.helloWorld=onRequest({// This function doesn't access other Firebase project resources, so it uses a limited service account.serviceAccount:"my-limited-access-sa@",// or prefer the full form: "my-limited-access-sa@my-project.iam.gserviceaccount.com"},(request,response)=>{response.send("Hello from Firebase!");},);

If you want to set the same service account for all of your functions, you can do that with the setGlobalOptions function.
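For example, a sketch of applying a single, hypothetical service account to every function in the codebase might look like this:

const {setGlobalOptions} = require("firebase-functions/v2");

// Run all functions in this codebase as a limited, custom service account
setGlobalOptions({
  serviceAccount: "my-limited-access-sa@my-project.iam.gserviceaccount.com",
});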

Set timeout and memory allocation

In some cases, your functions may have special requirements for a long timeout value or a large allocation of memory. You can set these values either in the Google Cloud console or in the function source code (Firebase only), using timeout values within these maximum duration limits:

  • HTTP and callable functions: 3600 seconds (60 minutes)
  • Scheduled/Task queue functions: 1800 seconds (30 minutes)
  • Other event-driven functions: 540 seconds (9 minutes)

To set memory allocation and timeout in functions source code, use the memory and timeout seconds options to customize the virtual machine running your functions. For example, this Cloud Storage function uses 1GiB of memory and times out after 300 seconds:

Node.js

const {onObjectFinalized} = require("firebase-functions/v2/storage");

exports.convertLargeFile = onObjectFinalized(
  {
    timeoutSeconds: 300,
    memory: "1GiB",
  },
  (event) => {
    // Do some complicated things that take a lot of memory and time
  },
);

Python

from firebase_functions import storage_fn, options

@storage_fn.on_object_finalized(timeout_sec=300, memory=options.MemoryOption.GB_1)
def convert_large_file(event: storage_fn.CloudEvent):
    # Do some complicated things that take a lot of memory and time.
    pass

To set memory allocation and timeout in the Google Cloud console:

  1. In the Google Cloud console, select Cloud Functions for Firebase from the left menu.
  2. Select a function by clicking on its name in the functions list.
  3. Click the Edit icon in the top menu.
  4. Select a memory allocation from the drop-down menu labeled Memory allocated.
  5. Click More to display the advanced options, and enter a number of seconds in the Timeout text box.
  6. Click Save to update the function.

Override CPU defaults

With up to 2GB of memory allocated, each function in Cloud Functions for Firebase (2nd gen) defaults to one CPU, increasing to 2 CPUs at 4GB and 8GB. Note that this is significantly different from 1st gen default behavior in ways that could lead to slightly higher costs for low-memory functions, as shown in the following table:

RAM allocated    Version 1 default CPU (fractional)    Version 2 default CPU    Price increase per ms
128MB            1/12                                  1                        10.5x
256MB            1/6                                   1                        5.3x
512MB            1/3                                   1                        2.7x
1GB              7/12                                  1                        1.6x
2GB              1                                     1                        1x
4GB              2                                     2                        1x
8GB              2                                     2                        1x
16GB             n/a                                   4                        n/a

If you prefer 1st gen behavior for your 2nd gen functions, set 1st gen defaults as a global option:

Node.js

// Turn off Firebase defaults
const {setGlobalOptions} = require("firebase-functions/v2");

setGlobalOptions({cpu: "gcf_gen1"});

Python

# Use 1st gen behavior
set_global_options(cpu="gcf_gen1")

For CPU-intensive functions, 2nd gen provides the flexibility to configure additional CPU. You can boost CPU on a per-function basis as shown:

Node.js

// Boost CPU in a function:
import {onObjectFinalized} from "firebase-functions/v2/storage";

export const analyzeImage = onObjectFinalized(
  {cpu: 2},
  (event) => {
    // computer vision goes here
  },
);

Python

# Boost CPU in a function:
from firebase_functions import storage_fn

@storage_fn.on_object_finalized(cpu=2)
def analyze_image(event: storage_fn.CloudEvent):
    # computer vision goes here
    pass
