# Helm chart for self-hosted Pydantic Logfire
```sh
helm repo add pydantic https://charts.pydantic.dev/
helm upgrade --install logfire pydantic/logfire
```
Pydantic Logfire has a number of external prerequisites, including PostgreSQL, Dex, and object storage.
You will require image pull secrets to pull the Docker images from our private repository. Get in contact with us to get a copy of them.

When you have the `key.json` file you can load it in as a secret like so:

```sh
kubectl create secret docker-registry logfire-image-key \
  --docker-server=us-docker.pkg.dev \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=YOUR-EMAIL@example.com
```
Then you can either configure your service account to use them or specify this in `values.yaml` under `imagePullSecrets`:

```yaml
imagePullSecrets:
  - logfire-image-key
```
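If you prefer the service-account route instead, here is a minimal sketch, assuming the pods run under the `default` service account in the release namespace:

```sh
# Attach the image pull secret to the service account used by the pods
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "logfire-image-key"}]}'
```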
A hostname is required, e.g. `logfire.example.com`. Set it via the `ingress.hostname` value.

We provide an ingress configuration that allows you to set up ingress:

```yaml
ingress:
  enabled: true
  tls: true
  hostname: logfire.example.com
  ingressClassName: nginx
```
We expose a service called `logfire-service` which will route traffic appropriately.
If you don't want to use the ingress controller, you will still need to define hostnames and whether you are externally using TLS.

For example, this config will turn off the ingress resource, but still set appropriate CORS headers for the `logfire-service`:

```yaml
ingress:
  # this turns off the ingress resource
  enabled: false
  # used to ensure appropriate CORS headers are set.
  # If your browser is accessing it over https, then this needs to be enabled here
  tls: true
  # used to ensure appropriate CORS headers are set.
  hostname: logfire.example.com
```
If you are not using Kubernetes ingress, you must still set the hostnames under the `ingress` configuration.
Dex is used as the identity service for Logfire and can be configured with many different types of connectors. The full list of connectors can be found here: https://dexidp.io/docs/connectors/

There is some default configuration provided in `values.yaml`. Depending on which connector you want to use, you can configure Dex connectors accordingly.

Here's an example using `github` as a connector:

```yaml
logfire-dex:
  ...
  config:
    connectors:
      - type: "github"
        id: "github"
        name: "GitHub"
        config:
          # You get clientID and clientSecret by creating a GitHub OAuth App
          # See https://docs.github.com/en/apps/oauth-apps/building-oauth-apps/creating-an-oauth-app
          clientID: client_id
          clientSecret: client_secret
          getUserInfo: true
```
To use GitHub as an example, you can find general instructions for creating an OAuth app in the GitHub docs.
Dex allows configuration parameters to reference environment variables. This can be done by using the `$` symbol. For example, the `clientID` and `clientSecret` can be set as environment variables:

```yaml
logfire-dex:
  env:
    - name: GITHUB_CLIENT_ID
      valueFrom:
        secretKeyRef:
          name: my-github-secret
          key: client-id
    - name: GITHUB_CLIENT_SECRET
      valueFrom:
        secretKeyRef:
          name: my-github-secret
          key: client-secret
  config:
    connectors:
      - type: "github"
        id: "github"
        name: "GitHub"
        config:
          clientID: $GITHUB_CLIENT_ID
          clientSecret: $GITHUB_CLIENT_SECRET
          getUserInfo: true
```
You would have to manually (or via IaC, etc.) create `my-github-secret`. This allows you to avoid putting any secrets into a `values.yaml` file.
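As a minimal sketch, such a secret could be created directly with `kubectl`; the literal values shown are placeholders for your own OAuth app credentials:

```sh
# Create the secret referenced by the dex env vars above
kubectl create secret generic my-github-secret \
  --from-literal=client-id='<github-oauth-client-id>' \
  --from-literal=client-secret='<github-oauth-client-secret>'
```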
Remember to add the image pull secrets to Dex's service account `logfire-dex` if you're not using `imagePullSecrets`.
We recommend storing secrets as Kubernetes secrets and referencing them in the `values.yaml` file rather than hardcoding them; hardcoded secrets are more likely to be exposed and are harder to rotate.
Pydantic Logfire requires Object Storage to store data. There are a number of different integrations that can be used:
- Amazon S3
- Google Cloud Storage
- Azure Storage
Each has its own set of environment variables that can be used to configure it. However, if your Kubernetes service account has the appropriate credentials, those can be used instead by setting `serviceAccountName`.
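For instance, a minimal sketch relying on workload identity attached to an existing service account, rather than explicit keys (the service account name here is illustrative):

```yaml
# Pods run as this service account, whose cloud identity grants bucket access
serviceAccountName: logfire-object-store
objectStore:
  uri: s3://<bucket_name>
```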
For Amazon S3, these variables are extracted from the environment:

- `AWS_ACCESS_KEY_ID` -> access_key_id
- `AWS_SECRET_ACCESS_KEY` -> secret_access_key
- `AWS_DEFAULT_REGION` -> region
- `AWS_ENDPOINT` -> endpoint
- `AWS_SESSION_TOKEN` -> token
- `AWS_CONTAINER_CREDENTIALS_RELATIVE_URI` -> https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
- `AWS_ALLOW_HTTP` -> set to "true" to permit HTTP connections without TLS
Example:
```yaml
objectStore:
  uri: s3://<bucket_name>
  # Note: not needed if the service account specified by `serviceAccountName` itself has credentials
  env:
    AWS_DEFAULT_REGION: <region>
    AWS_SECRET_ACCESS_KEY:
      valueFrom:
        secretKeyRef:
          name: my-aws-secret
          key: secret-key
    AWS_ACCESS_KEY_ID: <access_key>
```
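The referenced `my-aws-secret` has to exist beforehand; as a sketch, it could be created like this (the key name matches the example above, the literal value is a placeholder):

```sh
kubectl create secret generic my-aws-secret \
  --from-literal=secret-key='<aws_secret_access_key>'
```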
For Google Cloud Storage, these variables are extracted from the environment:

- `GOOGLE_SERVICE_ACCOUNT`: location of service account file
- `GOOGLE_SERVICE_ACCOUNT_PATH`: (alias) location of service account file
- `SERVICE_ACCOUNT`: (alias) location of service account file
- `GOOGLE_SERVICE_ACCOUNT_KEY`: JSON serialized service account key
- `GOOGLE_BUCKET`: bucket name
- `GOOGLE_BUCKET_NAME`: (alias) bucket name
Example:
```yaml
objectStore:
  uri: gs://<bucket>
  # Note: not needed if the service account specified by `serviceAccountName` itself has credentials
  env:
    GOOGLE_SERVICE_ACCOUNT_PATH: /path/to/service/account
```
For Azure Storage, these variables are extracted from the environment:

- `AZURE_STORAGE_ACCOUNT_NAME`: storage account name
- `AZURE_STORAGE_ACCOUNT_KEY`: storage account master key
- `AZURE_STORAGE_ACCESS_KEY`: alias for `AZURE_STORAGE_ACCOUNT_KEY`
- `AZURE_STORAGE_CLIENT_ID`: client id for service principal authorization
- `AZURE_STORAGE_CLIENT_SECRET`: client secret for service principal authorization
- `AZURE_STORAGE_TENANT_ID`: tenant id used in oauth flows
Example:
```yaml
objectStore:
  uri: az://<container_name>
  env:
    AZURE_STORAGE_ACCOUNT_NAME: <storage_account_name>
    AZURE_STORAGE_ACCOUNT_KEY:
      valueFrom:
        secretKeyRef:
          name: my-azure-secret
          key: account-key
```
Pydantic Logfire nominally needs three separate PostgreSQL databases: `crud`, `ff`, and `dex`. Each will need a user with owner permissions to allow migrations to run. While they can all run on the same instance, they must be separate databases to prevent naming/schema collisions.
Here's an example set of values using `postgres.example.com` as the host:

```yaml
postgresDsn: postgres://postgres:postgres@postgres.example.com:5432/crud
postgresFFDsn: postgres://postgres:postgres@postgres.example.com:5432/ff

dex:
  ...
  # note that the dex chart does not use the uri style connector
  config:
    storage:
      type: postgres
      config:
        host: postgres.example.com
        port: 5432
        user: postgres
        database: dex
        password: postgres
        ssl:
          mode: disable
```
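When using an external PostgreSQL instance like this, the databases and owner role are not created by the chart; a minimal sketch of creating them, assuming an admin connection and an illustrative `logfire` role:

```sh
psql -h postgres.example.com -U postgres <<'SQL'
-- role name and password are illustrative; use your own credentials
CREATE ROLE logfire WITH LOGIN PASSWORD 'change-me';
CREATE DATABASE crud OWNER logfire;
CREATE DATABASE ff OWNER logfire;
CREATE DATABASE dex OWNER logfire;
SQL
```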
Pydantic Logfire uses SMTP to send emails. You will need to configure email using the following values:
```yaml
smtp:
  host: smtp.example.com
  port: 25
  username: user
  password: pass
  use_tls: false
```
Pydantic Logfire AI features can be enabled by setting the `ai` configuration in `values.yaml`. You need to specify the model provider and model name you want to use:

```yaml
ai:
  model: provider:model-name
  openAi:
    apiKey: openai-api-key
  vertexAi:
    region: region # Optional, only needed for Vertex AI if not using default region
  azureOpenAi:
    endpoint: azure-openai-endpoint
    apiKey: azure-openai-api-key
    apiVersion: azure-openai-api-version
```
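For instance, the values reference below gives `azure:gpt-4o` as the provider-prefixed model format; a sketch using Azure OpenAI, where the endpoint, key, and API version values are placeholders:

```yaml
ai:
  model: azure:gpt-4o
  azureOpenAi:
    endpoint: https://<resource>.openai.azure.com
    apiKey: <azure-openai-api-key>
    apiVersion: <azure-openai-api-version>
```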
A number of components within Logfire allow containers/pods to be horizontally scaled. Depending on your setup, you may also want several replicas running to ensure redundancy if a node fails.
Each service has both resources and autoscaling configured in the same way:
```yaml
<service_name>:
  # -- Number of pod replicas
  replicas: 1
  # -- Resource limits and allocations
  resources:
    cpu: "1"
    memory: "1Gi"
  # -- Autoscaler settings
  autoscaling:
    minReplicas: 2
    maxReplicas: 4
    memAverage: 65
    cpuAverage: 20
```
See `values.yaml` for some production-level values.
Since this is self-hosted, you will need to update your Logfire configuration to send data to a different URL. You can do this by specifying the `base_url` in advanced config:

```python
import logfire

logfire.configure(
    token='<your_logfire_token>',
    advanced=logfire.AdvancedOptions(base_url="https://logfire.example.com"),
)

logfire.info('Hello, {place}!', place='World')
```
There are various development options you can set to test out the Helm chart. We have two flavours: `values.docker-desktop.yaml` and `values.k3s.yaml`. Both are intended for development of the Helm chart only and should not be considered production ready.
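As a sketch, assuming a local checkout of the chart repository, one of these files can be layered on top of the defaults:

```sh
# chart path is illustrative; adjust to where the chart lives in your checkout
helm upgrade --install logfire ./charts/logfire -f values.docker-desktop.yaml
```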
You can run a dev instance of PostgreSQL within the chart if you are just starting out. This deployment will take care of creating all the databases needed.
Put the following values in your `values.yaml` file:

```yaml
# To enable deployment of internal PostgreSQL
dev:
  deployPostgres: true

postgresDsn: postgres://postgres:postgres@logfire-postgres:5432/crud
postgresFFDsn: postgres://postgres:postgres@logfire-postgres:5432/ff

dex:
  ...
  config:
    storage:
      type: postgres
      config:
        host: logfire-postgres
        port: 5432
        user: postgres
        database: dex
        password: postgres
        ssl:
          mode: disable
```
You can run `maildev`, which will allow you to send/receive emails without an external SMTP server. Add the following to your `values.yaml`:

```yaml
ingress:
  ...
  maildevHostname: maildev.example.com

dev:
  ...
  deployMaildev: true
```
By default we bundle a single-node MinIO instance to allow you to test out object storage. This is not intended for production use, but is useful for development.
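A sketch of enabling it and pointing the object store at it, assuming the bundled MinIO's default service name, port, bucket, and credentials from the values reference below:

```yaml
dev:
  deployMinio: true

objectStore:
  uri: s3://logfire
  env:
    # service name and port assumed from the bundled MinIO defaults (fullnameOverride: logfire-minio)
    AWS_ENDPOINT: http://logfire-minio:9000
    AWS_ACCESS_KEY_ID: logfire-minio
    AWS_SECRET_ACCESS_KEY: logfire-minio
    # required because the dev MinIO endpoint is plain HTTP
    AWS_ALLOW_HTTP: "true"
```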
Repository | Name | Version |
---|---|---|
https://charts.bitnami.com/bitnami | minio | 17.0.9 |
https://charts.bitnami.com/bitnami | postgresql | 16.7.15 |
Key | Type | Default | Description |
---|---|---|---|
ai.azureOpenAi.apiKey | string | nil | The Azure OpenAI API key |
ai.azureOpenAi.apiVersion | string | nil | The Azure OpenAI API version |
ai.azureOpenAi.endpoint | string | nil | The Azure OpenAI endpoint |
ai.model | string | nil | The AI provider and model to use. Prefix with the provider, e.g. for Azure use `azure:gpt-4o`. See https://ai.pydantic.dev/models/ for more info |
ai.openAi.apiKey | string | nil | The OpenAI API key |
ai.vertexAi.region | string | nil | The region for Vertex AI |
dev | object | {"deployMaildev":false,"deployMinio":false,"deployPostgres":false} | Development mode settings |
dev.deployMaildev | bool | false | Deploy maildev for testing emails |
dev.deployMinio | bool | false | Do NOT use this in production! |
dev.deployPostgres | bool | false | Do NOT use this in production! |
existingSecret | object | {"annotations":{},"enabled":false,"name":""} | Existing Secret with the following keys: `logfire-dex-client-secret`, `logfire-meta-write-token`, `logfire-meta-frontend-token`, `logfire-jwt-secret` |
existingSecret.annotations | object | {} | Optional annotations for the secret, e.g., for external secret managers. |
existingSecret.enabled | bool | false | Set to true to use an existing secret. Highly recommended for Argo CD users. |
existingSecret.name | string | "" | The name of the Kubernetes Secret resource. |
hooksAnnotations | string | nil | Custom annotations for migration Jobs |
image.pullPolicy | string | "IfNotPresent" | The pull policy for docker images |
imagePullSecrets | list | [] | The secret used to pull down container images for pods |
ingress.annotations | object | {} | Any annotations required. |
ingress.enabled | bool | false | Enable Ingress Resource. If you're not using an ingress resource, you still need to configure `tls`, `hostname` |
ingress.hostname | string | "logfire.example.com" | The hostname used for Pydantic Logfire |
ingress.ingressClassName | string | "nginx" | |
ingress.tls | bool | false | Enable TLS/HTTPS connections. Required for CORS headers |
logfire-dex | object | {"annotations":{},"config":{"connectors":[],"storage":{"config":{"database":"dex","host":"logfire-postgres","password":"postgres","port":5432,"ssl":{"mode":"disable"},"user":"postgres"},"type":"postgres"}},"podAnnotations":{},"replicas":1,"resources":{"cpu":"1","memory":"1Gi"},"service":{"annotations":{}}} | Configuration, autoscaling & resources for the `logfire-dex` deployment |
logfire-dex.annotations | object | {} | Workload annotations |
logfire-dex.config | object | {"connectors":[],"storage":{"config":{"database":"dex","host":"logfire-postgres","password":"postgres","port":5432,"ssl":{"mode":"disable"},"user":"postgres"},"type":"postgres"}} | Dex Config |
logfire-dex.config.connectors | list | [] | Dex auth connectors, see https://dexidp.io/docs/connectors/. The redirectURI config option can be omitted, as it will be automatically generated; however, if specified, the custom value will be honored |
logfire-dex.config.storage | object | {"config":{"database":"dex","host":"logfire-postgres","password":"postgres","port":5432,"ssl":{"mode":"disable"},"user":"postgres"},"type":"postgres"} | Dex storage configuration, see https://dexidp.io/docs/configuration/storage/ |
logfire-dex.podAnnotations | object | {} | Pod annotations |
logfire-dex.replicas | int | 1 | Number of replicas |
logfire-dex.resources | object | {"cpu":"1","memory":"1Gi"} | resources |
logfire-dex.service.annotations | object | {} | Service annotations |
logfire-ff-ingest.annotations | object | {} | Workload annotations |
logfire-ff-ingest.podAnnotations | object | {} | Pod annotations |
logfire-ff-ingest.service.annotations | object | {} | Service annotations |
logfire-ff-ingest.volumeClaimTemplates | object | {"storage":"16Gi"} | Configuration for the PersistentVolumeClaim template for the stateful set. |
logfire-ff-ingest.volumeClaimTemplates.storage | string | "16Gi" | The amount of storage to provision for each pod. |
logfire-redis.enabled | bool | true | Enable redis as part of this helm chart. Disable this if you want to provide your own redis instance. |
logfire-redis.image | object | {"pullPolicy":"IfNotPresent","repository":"redis","tag":"7.2"} | Redis image configuration |
logfire-redis.image.pullPolicy | string | "IfNotPresent" | Redis image pull policy |
logfire-redis.image.repository | string | "redis" | Redis image repository |
logfire-redis.image.tag | string | "7.2" | Redis image tag |
minio.args[0] | string | "server" | |
minio.args[1] | string | "/data" | |
minio.auth.rootPassword | string | "logfire-minio" | |
minio.auth.rootUser | string | "logfire-minio" | |
minio.command[0] | string | "minio" | |
minio.fullnameOverride | string | "logfire-minio" | |
minio.lifecycleHooks.postStart.exec.command[0] | string | "sh" | |
minio.lifecycleHooks.postStart.exec.command[1] | string | "-c" | |
minio.lifecycleHooks.postStart.exec.command[2] | string | "# Wait for the server to start\nsleep 5\n# Create a bucket\nmc alias set local http://localhost:9000 logfire-minio logfire-minio\nmc mb local/logfire\nmc anonymous set public local/logfire\n" | |
minio.persistence.mountPath | string | "/data" | |
minio.persistence.size | string | "32Gi" | |
objectStore | object | {"env":{},"uri":null} | Object storage details |
objectStore.env | object | {} | additional env vars for the object store connection |
objectStore.uri | string | nil | URI for object storage, e.g. `s3://bucket` |
otel_collector | object | {"prometheus":{"add_metric_suffixes":false,"enable_open_metrics":true,"enabled":false,"endpoint":"0.0.0.0","metric_expiration":"180m","port":9090,"resource_to_telemetry_conversion":{"enabled":true},"send_timestamp":true}} | Config for otel-collector |
podSecurityContext | object | {} | Pod security context. See the API reference for details. |
postgresDsn | string | "postgresql://postgres:postgres@logfire-postgres:5432/crud" | Postgres DSN used for the `crud` database |
postgresFFDsn | string | "postgresql://postgres:postgres@logfire-postgres:5432/ff" | Postgres DSN used for the `ff` database |
postgresSecret | object | {"annotations":{},"enabled":false,"name":""} | User provided postgres credentials containing `postgresDsn` and `postgresFFDsn` keys |
postgresSecret.annotations | object | {} | Optional annotations for the secret, e.g., for external secret managers. |
postgresSecret.enabled | bool | false | Set to true to use an existing secret. Highly recommended for Argo CD users. |
postgresSecret.name | string | "" | The name of the Kubernetes Secret resource. |
postgresql.auth.postgresPassword | string | "postgres" | |
postgresql.fullnameOverride | string | "logfire-postgres" | |
postgresql.postgresqlDataDir | string | "/var/lib/postgresql/data/pgdata" | |
postgresql.primary.initdb.scripts."create_databases.sql" | string | "CREATE DATABASE crud;\nCREATE DATABASE dex;\nCREATE DATABASE ff;\n" | |
postgresql.primary.persistence.mountPath | string | "/var/lib/postgresql" | |
postgresql.primary.persistence.size | string | "10Gi" | |
priorityClassName | string | "" | Specify a priority class name to set pod priority. |
redisDsn | string | "redis://logfire-redis:6379" | The DSN for redis. Change from default if you have an external redis instance |
revisionHistoryLimit | int | 2 | Define the count of deployment revisions to be kept. May be set to 0 in case of GitOps deployment approach. |
securityContext | object | {} | Container security context. See the API reference for details. |
serviceAccountName | string | "default" | the Kubernetes Service Account that is used by the pods |
smtp.host | string | nil | Hostname of the SMTP server |
smtp.password | string | nil | SMTP password |
smtp.port | int | 25 | Port of the SMTP server |
smtp.use_tls | bool | false | Whether to use TLS |
smtp.username | string | nil | SMTP username |
Autogenerated from chart metadata using helm-docs v1.14.2