chore: automatically move projects to secondary ingestion on S3 rate-limits #10691
Conversation
```ts
    .default("true"),
  LANGFUSE_S3_LIST_MAX_KEYS: z.coerce.number().positive().default(200),
  LANGFUSE_S3_SLOWDOWN_TTL_SECONDS: z.coerce.number().positive().default(14400), // 4 hours
```
Usually AWS is faster with scaling than this. I would move the TTL down to ~1 hour.
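The TTL under discussion comes from an environment variable parsed through the zod schema shown above. A minimal dependency-free sketch of the same coercion behavior (the helper name `positiveNumberFromEnv` is hypothetical, not part of the PR):

```typescript
// Hypothetical helper mirroring the zod coercion in env.ts:
// parse a positive number from an env string, falling back to a default.
function positiveNumberFromEnv(raw: string | undefined, fallback: number): number {
  if (raw === undefined || raw === "") return fallback;
  const n = Number(raw);
  if (!Number.isFinite(n) || n <= 0) {
    throw new Error(`expected a positive number, got "${raw}"`);
  }
  return n;
}

// Usage sketch, e.g.:
// const ttl = positiveNumberFromEnv(process.env.LANGFUSE_S3_SLOWDOWN_TTL_SECONDS, 14400);
// Default is 4 hours (14400 s); the reviewer suggests ~1 hour (3600 s) instead.
```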
```ts
  `Redirecting ingestion event to secondary queue for project ${job.data.payload.authCheck.scope.projectId}`,
```
```ts
  `Redirecting ingestion event to secondary queue for project ${projectId}`,
  {
    reason: shouldRedirectSlowdown ? "s3_slowdown_flag" : "env_config",
```
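The `reason` field logged above distinguishes the two redirect triggers: a per-project S3 SlowDown flag in Redis versus a static environment-based project list. A small sketch of that decision, under the assumption that these are the only two triggers (the function name `redirectReason` is hypothetical):

```typescript
// Hypothetical sketch of the redirect decision logged above.
type RedirectReason = "s3_slowdown_flag" | "env_config" | null;

function redirectReason(
  hasSlowdownFlag: boolean, // e.g. the result of a Redis flag lookup
  envRedirectedProjects: Set<string>, // projects redirected via env config
  projectId: string,
): RedirectReason {
  if (hasSlowdownFlag) return "s3_slowdown_flag";
  if (envRedirectedProjects.has(projectId)) return "env_config";
  return null; // stay on the primary ingestion queue
}
```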
We also need to make sure containers scale on the secondary queue length; otherwise we may move a large new ingestor there and never scale up containers to clear the queue.
We also need to check alerting on that queue.
Important
Automatically redirects projects to a secondary ingestion queue on S3 SlowDown errors, with Redis tracking and environment configuration updates.
- Updates `processEventBatch()` in `processEventBatch.ts` and `ingestionQueueProcessorBuilder()` in `ingestionQueue.ts`.
- Calls `markProjectS3Slowdown()` in `s3SlowdownTracking.ts`.
- Adds `LANGFUSE_S3_SLOWDOWN_TTL_SECONDS` to `env.ts` with a default of 14400 seconds (4 hours).
- Adds `isS3SlowDownError()`, `markProjectS3Slowdown()`, and `hasS3SlowdownFlag()` in `s3SlowdownTracking.ts` for handling S3 SlowDown errors and tracking in Redis.
- Exports `s3SlowdownTracking` in `index.ts`.

This description was created by for 070df53. You can customize this summary. It will automatically update as commits are pushed.
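A sketch of what the SlowDown detection named in the summary could look like. S3 throttling surfaces as a `SlowDown` error (HTTP 503) from the AWS SDK; the Redis key format below is an assumption for illustration, not taken from the PR:

```typescript
// Hypothetical sketch of an isS3SlowDownError() check: S3 throttling errors
// carry the code "SlowDown" and an HTTP 503 status in AWS SDK v3 metadata.
function isS3SlowDownError(err: unknown): boolean {
  if (typeof err !== "object" || err === null) return false;
  const e = err as {
    name?: string;
    Code?: string;
    $metadata?: { httpStatusCode?: number };
  };
  return (
    e.name === "SlowDown" ||
    e.Code === "SlowDown" ||
    e.$metadata?.httpStatusCode === 503
  );
}

// markProjectS3Slowdown()/hasS3SlowdownFlag() would SET/GET a per-project key
// with EX = LANGFUSE_S3_SLOWDOWN_TTL_SECONDS so the redirect expires on its own.
function s3SlowdownKey(projectId: string): string {
  return `s3-slowdown:${projectId}`; // key format is an assumption
}
```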