Replies: 3 comments
-
Great question! Ultimately it really depends on your application: how many distinct stacks there are, how many function names there are and how long they are, and so on. So this may be perfectly OK. That said, I have a few recommendations:
To keep only memory in-use bytes, use this configuration in your scrape config:
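(The original snippet did not survive in this thread; below is a hedged sketch based on Parca's Prometheus-style scrape configuration. The job name, target, and exact `pprof_config` keys are assumptions, so verify the profile-type names against the config reference for your Parca version.)

```yaml
# Sketch: keep only the memory (heap) profile, which carries in-use bytes,
# and disable the other in-process profile types for this target.
scrape_configs:
  - job_name: "my-service"            # placeholder job name
    scrape_interval: "45s"
    static_configs:
      - targets: ["my-service:8080"]  # placeholder target
    profiling_config:
      pprof_config:
        memory:
          enabled: true   # heap profile; in-use bytes live here
        process_cpu:
          enabled: false
        goroutine:
          enabled: false
        block:
          enabled: false
        mutex:
          enabled: false
```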
And last but not least, if possible (and we understand it's not necessarily always possible), use Parca Agent for CPU profiling, and turn off the in-process profilers from scraping using:
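(Again, the exact snippet was lost from the thread; this is a plausible sketch assuming the same `pprof_config` keys as above, with `process_cpu` as the key name for the in-process CPU profile.)

```yaml
# Sketch: with Parca Agent (eBPF-based) collecting CPU profiles system-wide,
# turn off in-process CPU scraping so CPU data is not collected twice.
profiling_config:
  pprof_config:
    process_cpu:
      enabled: false
```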
In addition to that, you can tweak the amount of active in-memory data allowed by the storage using the storage flag.
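(The flag name was truncated in this thread; `--storage-active-memory` is my assumption for the flag being referred to, so check `parca --help` for your version.)

```shell
# Assumed flag: caps the bytes of active in-memory profile data
# the storage holds before it is persisted/rotated.
parca --config-path=parca.yaml --storage-active-memory=536870912  # 512 MiB
```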
-
Thank you for the advice! I applied the first and second suggestions.
As for CPU profiles, I tried enabling the agent, but it consumed a fair amount of resources as well. Is there an option to collect profiles only for specific services with the agent?
-
My goal is to have predictable resource consumption. Using limits in Kubernetes is probably acceptable for this case, but I still have a feeling that there is a more resilient way to handle the problem.
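For reference, bounding the server's memory with Kubernetes limits looks roughly like this (a minimal, illustrative fragment; the names and values are not from this thread):

```yaml
# Sketch: cap Parca's container memory. Note that a hard limit alone does not
# prevent OOMKills; it only makes them happen at a predictable threshold, so
# it works best combined with Parca's own in-memory storage limit.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: parca
spec:
  template:
    spec:
      containers:
        - name: parca
          resources:
            requests:
              memory: "1Gi"
            limits:
              memory: "2Gi"   # container is OOMKilled above this
```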
-
Hello, team! I have a question.
After completing the Kubernetes deployment guide, I now have a Parca installation. For now there are no agents, only two profile scrape sources configured manually.
The problem is that memory consumption keeps growing unboundedly until the pod is OOMKilled.
https://pprof.me/4aae8e1/
So my question is: is everything OK? Do I need to give Parca more resources? Or did I miss something?