feat: implement observability of notifications subsystem #13799


Merged
dannykopping merged 27 commits into main from dk/system-notifications-o11y on Jul 11, 2024

Conversation

@dannykopping (Contributor) commented on Jul 8, 2024 (edited)

Closes coder/internal#3

Also performed some drive-by refactorings.

NOTE: this is stacked on `dk/system-notifications-lib`, but I didn't want to use Graphite for fear it screws up that branch, which is close to being merged; I'm still pretty new to that tool.

@dannykopping force-pushed the dk/system-notifications-o11y branch 3 times, most recently from ad7f470 to 18dbabe on July 8, 2024 08:47
@dannykopping requested review from mtojek, matifali, mafredri and spikecurtis and removed the request for matifali on July 8, 2024 08:55
@dannykopping marked this pull request as ready for review on July 8, 2024 08:55
@dannykopping marked this pull request as draft on July 8, 2024 09:03
@mafredri (Member) left a comment

FWIW the changes LGTM in isolation (I haven't reviewed the original code).

@@ -0,0 +1,2 @@
ALTER TABLE notification_messages
ADD COLUMN queued_seconds FLOAT NULL;
Member commented

Nit: The name queued_seconds implies an integer; consider changing the type or the name.

Contributor (Author) commented

I don't think it necessarily implies integer; time.Duration's Seconds() returns a float64.

@mafredri (Member) commented on Jul 8, 2024 (edited)

Fair enough 😄.

In the DB, usually it means seconds:

    lifetime_seconds bigint DEFAULT 86400 NOT NULL,
    timeout_seconds integer NOT NULL,
    connection_timeout_seconds integer DEFAULT 0 NOT NULL,

But fine to keep as float if we care about the precision (although I'd prefer to multiply the number to required precision in int).

Contributor (Author) commented

I suppose I don't strictly need to keep sub-second precision on the length of time messages were queued for.
I'll switch to integer 👍

Contributor commented

This is backing a Prometheus metric; metrics are always floats and by convention end in the SI unit, which for time is seconds. It implies no precision.

This should be a float: there just isn't any compelling reason to pre-judge the required precision. This is not a banking application, nor do we need to worry about the relative speed of integer vs floating point arithmetic.

The name queued_seconds is appropriate.

dannykopping reacted with thumbs up emoji
@mtojek (Member) left a comment

Nothing blocking 👍

@mtojek (Member) left a comment

Thanks for all the replies.

Base automatically changed from dk/system-notifications-lib to main on July 8, 2024 13:38
dannykopping and others added 8 commits on July 8, 2024 15:54
Minor refactoring to make testing easier
Signed-off-by: Danny Kopping <danny@coder.com>
@dannykopping force-pushed the dk/system-notifications-o11y branch from 0e6fdae to 114797d on July 8, 2024 13:55
@dannykopping marked this pull request as ready for review on July 8, 2024 13:55
subsystem = "notifications"

LabelMethod = "method"
LabelTemplateID = "template_id"
Contributor commented

maybe "notification_template_id" --- there is another, more user-facing concept of "template" which people might assume this refers to.

dannykopping reacted with thumbs up emoji
PermFailureCount: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
Name: "permanent_failures_count", Namespace: ns, Subsystem: subsystem,
Help: "The count of notifications which failed and have exceeded their retry attempts.",
}, []string{LabelMethod, LabelTemplateID}),
Contributor commented

DispatchCount, TempFailureCount, and PermFailureCount might make sense as a single metric with different "disposition" (or "result") labels. The help text refers to them as "notifications", but I think that's misleading. What we are counting is "notification attempts" or "dispatch attempts", which might individually end up in success or some kind of failure. In particular, in the case of failure and retry, the number of delivery attempts can exceed the number of notifications, so it's important to make the distinction.

Prometheus guidelines say that something should be a single metric with labels (vs multiple metrics) when the sum or average of the label is sensible and coherent. Filtering by method and summing across disposition would be useful, for example, to track the number of web requests we are making via Webhook delivery.
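As a rough sketch of that idea (the DispatchAttempts field, the LabelResult label, and its values are illustrative, reusing the PR's ns/subsystem/reg, and not necessarily what was merged):

LabelResult = "result" // e.g. "success", "temporary_failure", "permanent_failure"

DispatchAttempts: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
	Name: "dispatch_attempts_total", Namespace: ns, Subsystem: subsystem,
	Help: "The number of dispatch attempts, by delivery method, template, and result.",
}, []string{LabelMethod, LabelTemplateID, LabelResult}),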

Contributor (Author) commented

Super idea! Simplifies things a lot.

PendingUpdates: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "pending_updates", Namespace: ns, Subsystem: subsystem,
Help: "The number of updates waiting to be flushed to the store.",
}),
Contributor commented

A really useful Gauge is the number of "in progress" delivery attempts. You increment it when you kick off the delivery goroutine and decrement it when you send to the success or failure channels. This can help spot hung handlers/leaked goroutines.
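A minimal sketch of that pattern, with a hypothetical InflightDispatches gauge (not a name from this PR):

n.metrics.InflightDispatches.Inc()
go func() {
	// Decrement only once a result has been handed off; a gauge that never
	// comes back down points at a hung handler or a leaked goroutine.
	defer n.metrics.InflightDispatches.Dec()
	// ... dispatch msg, then send the outcome to the success or failure channel ...
}()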

dannykopping reacted with thumbs up emoji
Contributor (Author) commented

Good idea!

func NewMetrics(reg prometheus.Registerer) *Metrics {
return &Metrics{
DispatchedCount: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
Name: "dispatched_count", Namespace: ns, Subsystem: subsystem,
Contributor commented

Accumulating counts should end in the unit and _total, as in coderd_notifications_successful_deliveries_total, where "deliveries" is the unit (could use "dispatch_attempts" or something similar).

https://prometheus.io/docs/practices/naming/
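For example, applying that convention to the counter above might look like this (the rename is illustrative and the Help text is assumed, not taken from the PR):

DispatchedCount: promauto.With(reg).NewCounterVec(prometheus.CounterOpts{
	Name: "successful_deliveries_total", Namespace: ns, Subsystem: subsystem,
	Help: "The number of successful notification deliveries.",
}, []string{LabelMethod, LabelTemplateID}),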

Contributor (Author) commented

Wow, 4 months out of Grafana Labs and I forget all my Prometheus manners.
Thanks!

// Currently no requirement to discriminate between success and failure updates which are pending.
PendingUpdates: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "pending_updates", Namespace: ns, Subsystem: subsystem,
Help: "The number of updates waiting to be flushed to the store.",
Contributor commented

Sounds too generic to me. "The number of delivery attempt results waiting to be flushed to the store" ?

dannykopping reacted with thumbs up emoji
@@ -0,0 +1,2 @@
ALTER TABLE notification_messages
ADD COLUMN queued_seconds INTEGER NULL;
@spikecurtis (Contributor) commented on Jul 8, 2024 (edited)

Do we need to store this on the row? It feels a little strange and ambiguous. If the notification_message is currently queued, then it's a bit meaningless, since we don't know for how much longer it will be queued. I.e. it is at present being returned on Enqueue, but is meaningless in that context.

When we dequeue it, can't the notifier compute how long it was on the queue based on its created_at or updated_at time? Or, if we really want to ensure all the measurements are based on the central PostgreSQL server clock, then it can be part of the result returned to the notifier without having to write it back to the database.

Contributor (Author) commented

Yeah I agree it's a bit awkward.

The problem is that updated_at is modified when the message is acquired:

-- name: AcquireNotificationMessages :many
WITH acquired AS (
    UPDATE
        notification_messages
    SET
        ...
        updated_at = NOW(),

I tried various approaches to storing the updated_at value prior to its being modified here, but this approach seemed simpler.

"queued" should refer to the time between attempts; I can't use created_at for that reason. Good call on its being returned via EnqueueNotificationMessage; that's definitely not meaningful.

I should probably set it to 0 again if the message is marked as failed; that way the value will always be fairly meaningful:

  • pending: value should be 0
  • leased/sent: value should be present
  • failure (temporary or permanent): value should be 0

}
}
n.metrics.PendingUpdates.Set(float64(len(success) + len(failure)))
Contributor commented

I think it would be cleaner to just compute this metric on Collect, rather than Set-ing it when something reads or writes to these channels. As an illustration of how easy it is to miss a spot, note that you have forgotten to set it in process() after a failed call to prepare().
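A sketch of the Collect-time alternative, assuming the success and failure channels are in scope where the metrics are registered (the GaugeFunc shape is illustrative, not what this PR merged):

promauto.With(reg).NewGaugeFunc(prometheus.GaugeOpts{
	Name: "pending_updates", Namespace: ns, Subsystem: subsystem,
	Help: "The number of updates waiting to be flushed to the store.",
}, func() float64 {
	// Evaluated on every scrape, so no call site can forget to update it.
	return float64(len(success) + len(failure))
})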

Contributor (Author) commented

I agree, although changing to a Collect-based approach isn't worth the complexity IMHO.
This metric is updated in syncUpdates every 2s (by default), so that's probably good enough™️ for now.

I was initially planning to give syncUpdates the responsibility of updating this exclusively, but since StoreSyncInterval is tuneable, operators could mess with the metric collection in a way that is difficult to reason about.

I'll add the missing update, thanks.

// Aggregating on LabelTemplateID as well would cause a cardinality explosion.
QueuedSeconds: promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
Name: "queued_seconds", Namespace: ns, Subsystem: subsystem,
Buckets: []float64{1, 5, 15, 30, 60, 120, 300, 600, 3600, 86400},
Contributor commented

We say this should be less than the fetch interval, which is 15s --- so I think there should be more buckets <15s. In fact, it should be more like 15/(n+1) where n is the number of Coderd replicas, assuming that notifications usually get picked up by the next fetch. Seeing the average move up toward 15s is way more important than the distinction between 10minutes, 1 hour and 1 day, all of which indicate that notifications are just blocked entirely.
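An illustrative re-bucketing along those lines (the exact values and the LabelMethod-only labelling are assumptions, not what was merged): dense below the 15s fetch interval, coarse above it.

QueuedSeconds: promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
	Name: "queued_seconds", Namespace: ns, Subsystem: subsystem,
	// Most of the resolution sits under the 15s fetch interval; anything beyond
	// a few minutes just means dispatching is blocked, so coarse buckets suffice.
	Buckets: []float64{1, 2.5, 5, 7.5, 10, 15, 30, 60, 300, 3600},
}, []string{LabelMethod}),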

Contributor (Author) commented

Fair point.

// Aggregating on LabelTemplateID as well would cause a cardinality explosion.
DispatcherSendSeconds: promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
Name: "dispatcher_send_seconds", Namespace: ns, Subsystem: subsystem,
Buckets: []float64{0.001, 0.05, 0.1, 0.5, 1, 2, 5, 10, 15, 30, 60, 120},
Contributor commented

should be 0.001, 0.005, 0.01, 0.05 on the bottom end, right?

Contributor (Author) commented

I don't see much value in distinguishing between 1ms, 5ms, and 10ms; for most systems the latency will be higher than that (I'm assuming). The high end is what we're more concerned about here since that's indicative of a problem. In other words: this metric is more for diagnosing issues than measuring performance of the endpoint/server.

Contributor commented

Dispatching, say, a webhook, within the same cluster could be a few milliseconds latency. Same geographic region could be 10-20ms.

Almost nothing is going to be 1ms unless it's extremely simple and extremely close, but in 50ms you could be dispatching across an entire continent. With buckets like this you could have something that normally returns in 15ms take 3x its normal time and be close to falling over and this metric would not clock it. That seems much more important than 30s, 60s, 120s.
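By way of illustration, a low-end-heavy layout in that spirit might be (values are hypothetical and were not adopted in this PR):

DispatcherSendSeconds: promauto.With(reg).NewHistogramVec(prometheus.HistogramOpts{
	Name: "dispatcher_send_seconds", Namespace: ns, Subsystem: subsystem,
	// Resolve typical in-cluster and same-region latencies at the low end, with
	// a few coarse buckets retained for pathologically slow sends.
	Buckets: []float64{0.001, 0.005, 0.01, 0.025, 0.05, 0.1, 0.25, 0.5, 1, 5, 15, 60},
}, []string{LabelMethod}),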

Contributor (Author) commented

I hear you and I can see where you're coming from.

Let's wait for users to ask for this. I think there's sufficient coverage for now, and the additional datapoints are likely not going to be very useful for the majority of operators IMHO.

PendingUpdates: promauto.With(reg).NewGauge(prometheus.GaugeOpts{
Name: "pending_updates", Namespace: ns, Subsystem: subsystem,
Help: "The number of updates waiting to be flushed to the store.",
}),
Contributor commented

Also a count of success/failure updates flushed to the database.
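One possible shape for that, reusing the PR's registration pattern (the SyncedUpdates name and Help text are hypothetical):

SyncedUpdates: promauto.With(reg).NewCounter(prometheus.CounterOpts{
	Name: "synced_updates_total", Namespace: ns, Subsystem: subsystem,
	Help: "The number of dispatch results flushed to the store.",
}),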

dannykopping reacted with thumbs up emoji
Changes made to dbgen are to avoid this panic:

$ ./coderd/database/generate.sh
generate
panic: unknown return type: *dst.SelectorExpr

goroutine 1 [running]:
main.orderAndStubDatabaseFunctions({0x1400001ea80, 0x3c}, {0x102d019e0, 0x1}, {0x102d0bc75, 0xb}, 0x102df7b20)
        /Users/danny/Code/coder/coder/scripts/dbgen/main.go:367 +0x2588
main.run()
        /Users/danny/Code/coder/coder/scripts/dbgen/main.go:56 +0x118
main.main()
        /Users/danny/Code/coder/coder/scripts/dbgen/main.go:42 +0x1c
exit status 2

Signed-off-by: Danny Kopping <danny@coder.com>
@@ -215,24 +234,55 @@ func (n *notifier) deliver(ctx context.Context, msg database.AcquireNotification
case <-ctx.Done():
logger.Warn(context.Background(), "cannot record dispatch failure result", slog.Error(ctx.Err()))
return ctx.Err()
default:
case failure <- n.newFailedDispatch(msg, err, retryable):
@mtojek (Member) left a comment

👍

@mtojek (Member) left a comment

Stamp 👍

@dannykopping merged commit b2dab33 into main on Jul 11, 2024
28 checks passed
@dannykopping deleted the dk/system-notifications-o11y branch on July 11, 2024 08:57
@github-actions bot locked and limited conversation to collaborators on Jul 11, 2024
Reviewers

@spikecurtis left review comments

@mafredri approved these changes

@mtojek approved these changes

Assignees

@dannykopping

Labels
None yet
Projects
None yet
Milestone
No milestone
Development

Successfully merging this pull request may close these issues.

Implement observability of notifications subsystem
4 participants
@dannykopping, @mafredri, @spikecurtis, @mtojek
