What is missing?
If there are issues in a running pod that are visible in the Cassandra logs, we could create events directly from the SystemLogger, which we deploy as a sidecar to every Cassandra pod. This ensures that log processing scales no matter how many nodes the cluster has (assuming a low volume of events is generated).
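As a rough sketch of what the sidecar could do, the snippet below posts a Kubernetes Event attached to its own pod using client-go. This is only an illustration: it assumes `POD_NAME` and `POD_NAMESPACE` are injected via the downward API, and the reason/message strings are hypothetical, not an agreed format.

```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// emitPodEvent posts a Kubernetes Event attached to the pod this sidecar
// runs in, so issues spotted in the Cassandra logs surface in
// `kubectl describe pod` and `kubectl get events`.
func emitPodEvent(ctx context.Context, client kubernetes.Interface, reason, message string) error {
	ns := os.Getenv("POD_NAMESPACE") // assumed to be set via the downward API
	pod := os.Getenv("POD_NAME")
	now := metav1.NewTime(time.Now())
	event := &corev1.Event{
		ObjectMeta: metav1.ObjectMeta{GenerateName: pod + "-", Namespace: ns},
		InvolvedObject: corev1.ObjectReference{
			Kind:      "Pod",
			Namespace: ns,
			Name:      pod,
		},
		Reason:         reason,
		Message:        message,
		Type:           corev1.EventTypeWarning,
		FirstTimestamp: now,
		LastTimestamp:  now,
		Source:         corev1.EventSource{Component: "cassandra-system-logger"},
	}
	_, err := client.CoreV1().Events(ns).Create(ctx, event, metav1.CreateOptions{})
	return err
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Hypothetical trigger: the log tailer matched an error pattern.
	if err := emitPodEvent(context.Background(), client,
		"CassandraLogError", "example: error pattern matched in system.log"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```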
We could still output the logs to stdout as well, with the possibility of more filtering features. However, I don't think we should necessarily implement these features ourselves (the parsing language, tailing, etc.); instead, we could use a log shipper such as Vector (but not necessarily Vector) and expose the rules in a ConfigMap, so users who wish to can extend them for their own log-shipping requirements.
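For illustration, a user-overridable shipping configuration could look something like the ConfigMap below. Vector syntax is used only because it was mentioned as one candidate; the ConfigMap name, log mount path, and filter rule are all assumptions:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-log-shipper-config   # hypothetical name
data:
  vector.yaml: |
    sources:
      cassandra:
        type: file
        include:
          - /var/log/cassandra/system.log   # assumed log mount path
    transforms:
      only_errors:
        type: filter
        inputs: [cassandra]
        # assumed rule: users could replace this with their own filters/parsers
        condition: 'contains(string!(.message), "ERROR")'
    sinks:
      stdout:
        type: console
        inputs: [only_errors]
        encoding:
          codec: json
```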
We probably need to create the Kubernetes events sink ourselves, as not many products have one. Alternatively, just to create the events, we could add a webhook to cass-operator and call it via the service name. However, this requires that cass-operator does not become a bottleneck in large clusters and that it is always available (or we need queueing on the sending side, plus throttling, etc.).
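If we went the webhook route, the sending side would need something like the bounded queue and rate limiter sketched below, so a large cluster cannot overwhelm the operator. The webhook URL and payload shape are hypothetical; this only illustrates the queue-plus-throttle idea, not a finished client.

```go
package main

import (
	"bytes"
	"context"
	"log"
	"net/http"
	"time"

	"golang.org/x/time/rate"
)

// eventSender buffers event payloads and posts them to a (hypothetical)
// cass-operator webhook, throttled on the sending side.
type eventSender struct {
	queue   chan []byte
	limiter *rate.Limiter
	url     string
}

func newEventSender(url string) *eventSender {
	return &eventSender{
		queue:   make(chan []byte, 1024),            // bounded buffer on the sending side
		limiter: rate.NewLimiter(rate.Limit(5), 10), // at most 5 posts/s, burst of 10
		url:     url,
	}
}

// enqueue drops events when the buffer is full rather than blocking log processing.
func (s *eventSender) enqueue(payload []byte) {
	select {
	case s.queue <- payload:
	default:
		log.Println("event queue full, dropping event")
	}
}

// run drains the queue, waiting on the rate limiter before each post.
func (s *eventSender) run(ctx context.Context) {
	for {
		select {
		case <-ctx.Done():
			return
		case payload := <-s.queue:
			if err := s.limiter.Wait(ctx); err != nil {
				return
			}
			resp, err := http.Post(s.url, "application/json", bytes.NewReader(payload))
			if err != nil {
				log.Printf("webhook post failed: %v", err)
				continue // a real implementation would retry with backoff
			}
			resp.Body.Close()
		}
	}
}

func main() {
	// The operator service URL is an assumption for illustration.
	s := newEventSender("http://cass-operator-webhook.cass-operator.svc:8080/events")
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()
	go s.run(ctx)
	s.enqueue([]byte(`{"reason":"CassandraLogError","message":"example"}`))
	time.Sleep(time.Second) // give the worker a moment in this toy example
}
```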
k8ssandra/management-api-for-apache-cassandra#193