[receiver/k8s_events] k8sevents Receiver Benchmark Test: Memory Behaviour
Pinging code owners for receiver/k8sevents: @dmitryax @TylerHelmuth @ChrsMark. See Adding Labels via Comments if you do not have permissions to add labels yourself. For example, comment '/label priority:p2 -needs-triaged' to set the priority and remove the needs-triaged label.
Component(s)
OpenTelemetry Collector:
k8s_events receiver
Describe the issue you're reporting
I've been benchmarking the Kubernetes events receiver in OpenTelemetry with a load of 100 events per second. Initially, the memory usage is around 70-80 MB when the event generator starts. However, as the generator continues, the memory usage gradually increases over the first hour, stabilizing at around 1.7 GB.
When I stop the event generator, the memory usage decreases gradually over the next hour, stabilizing at 600-700 MB. It remains stable at this level for the following 10 hours, without returning to the initial 70-80 MB when no events are being generated.
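For anyone who wants to reproduce this, a generator along these lines should approximate the load. This is a minimal hypothetical sketch using client-go, not my exact generator; the event names, namespace, and involved object are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// One event every 10 ms, i.e. roughly 100 events per second.
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()
	for i := 0; ; i++ {
		<-ticker.C
		ev := &corev1.Event{
			ObjectMeta: metav1.ObjectMeta{
				GenerateName: "bench-event-", // placeholder name prefix
				Namespace:    "default",
			},
			InvolvedObject: corev1.ObjectReference{Kind: "Pod", Namespace: "default", Name: "bench-pod"},
			Reason:         "BenchmarkLoad",
			Message:        fmt.Sprintf("synthetic event %d", i),
			Type:           corev1.EventTypeNormal,
			LastTimestamp:  metav1.Now(),
		}
		if _, err := clientset.CoreV1().Events("default").Create(context.TODO(), ev, metav1.CreateOptions{}); err != nil {
			fmt.Println("create event:", err)
		}
	}
}
```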
I used pprof to analyze the memory usage, and the majority of the retained memory appears to be held by the cache, JSON processing, and reflect.
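For reference, the heap profiles were taken via the collector's pprof extension; a minimal config along these lines enables it (the endpoint shown is the extension's usual default, adjust as needed):

```yaml
extensions:
  pprof:
    endpoint: localhost:1777

service:
  extensions: [pprof]
```

Profiles can then be pulled with `go tool pprof http://localhost:1777/debug/pprof/heap` and inspected with `top`; comparing the `inuse_space` and `alloc_space` sample indexes helps separate memory that is actually retained from allocation churn.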
Does anyone have insights into why the memory usage doesn't return to the initial level when no events are being processed? Could this be due to caching mechanisms or to memory allocation behavior in the OpenTelemetry Collector?
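One mechanism I'm aware of: the Go runtime releases freed heap back to the OS lazily, so process RSS can stay well above the post-GC live heap for a long time. A way to check whether the residual 600-700 MB is idle heap the runtime is simply holding on to, versus memory still referenced (e.g., by an informer cache), is to compare MemStats before and after forcing a release. A minimal standalone diagnostic sketch illustrating the mechanism, not collector code:

```go
package main

import (
	"fmt"
	"runtime"
	"runtime/debug"
)

// printMem reports how much heap is live, idle, and already returned to the OS.
func printMem(label string) {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	fmt.Printf("%s: HeapAlloc=%d MiB HeapIdle=%d MiB HeapReleased=%d MiB\n",
		label, m.HeapAlloc>>20, m.HeapIdle>>20, m.HeapReleased>>20)
}

func main() {
	printMem("before")
	// Force a GC and ask the runtime to return as much memory to the OS as possible.
	debug.FreeOSMemory()
	printMem("after")
}
```

If HeapReleased jumps and RSS drops after debug.FreeOSMemory(), the residual usage is runtime retention rather than a leak in the receiver; if HeapAlloc stays high, something (such as a cache) is still holding references.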