# general
We had a need to scale on a specific custom Prometheus metric exposed by the pods (our API). Kubernetes v1.20 supported it, but in a buggy way; as of v1.22 we were all good.
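For anyone curious, here's a minimal sketch of what an HPA on a pod-exposed custom metric can look like. The Deployment name, metric name, and target value are hypothetical, and it assumes something like prometheus-adapter is already serving the metric through the custom metrics API:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                      # hypothetical Deployment running the API pods
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Pods                   # per-pod custom metric, averaged across pods
      pods:
        metric:
          name: inflight_requests  # hypothetical metric the pods expose to Prometheus
        target:
          type: AverageValue
          averageValue: "100"      # hypothetical target per pod
```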
Yes, we use 'oldest message in queue' or 'queue latency' to scale background processing workers. It works really well, and lets us scale down to zero for things that don't run often.
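A rough sketch of that scale-to-zero setup, assuming KEDA with a Prometheus trigger; the Deployment name, Prometheus address, query, and threshold below are all placeholders:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler
spec:
  scaleTargetRef:
    name: background-worker        # hypothetical worker Deployment
  minReplicaCount: 0               # allows scaling to zero when the queue is idle
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring:9090          # hypothetical Prometheus endpoint
        query: max(queue_oldest_message_age_seconds{queue="jobs"}) # hypothetical "oldest message" metric
        threshold: "30"            # scale up when the oldest message is older than ~30s
```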