
Alloy traces otlp configuration issue #2874

Open
avramenkovladyslav opened this issue Feb 28, 2025 · 0 comments


avramenkovladyslav commented Feb 28, 2025

In our Alloy configuration we have this code:

otelcol.receiver.otlp "grpc" {
  grpc {}

  output {
    traces = [otelcol.processor.tail_sampling.policies.input]
  }
}

otelcol.processor.tail_sampling "policies" {
  decision_wait = "30s"

  policy {
    name = "latency_policy"
    type = "latency"
    latency {
      threshold_ms = 150
    }
  }

  policy {
    name = "probabilistic_policy"
    type = "probabilistic"
    probabilistic {
      sampling_percentage = 20.0
    }
  }

  output {
    traces = [otelcol.exporter.otlp.tempo.input]
  }
}

otelcol.exporter.otlp "tempo" {
  client {
    endpoint = "http://tempo-distributor.monitoring.svc.cluster.local:4317"
    tls {
      insecure_skip_verify = true
      insecure = true
    }
  }
}

But when the traces are delivered to Tempo, we see strange behaviour when querying them: some of them have the status <root span not yet received>.

[Image: screenshot of a trace in Tempo showing the <root span not yet received> status]

We tried configurations with otelcol.processor.batch and with other values of decision_wait for otelcol.processor.tail_sampling, but had no success.
Is there anything else we can add to the traces pipeline to improve stability?
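
For reference, the otelcol.processor.batch variant we tried was wired roughly like this (the component label and the timeout/batch-size values here are illustrative, not the exact ones we used):

otelcol.receiver.otlp "grpc" {
  grpc {}

  output {
    traces = [otelcol.processor.batch.default.input]
  }
}

// Batch spans before tail sampling so that spans belonging to the same
// trace tend to arrive at the sampler closer together in time.
otelcol.processor.batch "default" {
  timeout         = "10s"
  send_batch_size = 8192

  output {
    traces = [otelcol.processor.tail_sampling.policies.input]
  }
}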
