
Conversation


@rajib76 rajib76 commented Nov 30, 2025

Description

This is a fix for the bug below:

[BUG] OpenTelemetry token/cost metrics double-counted in Langfuse due to duplicate reporting on parent and child spans #1267

Related Issues

#1267

Documentation PR

Type of Change

Bug fix

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

  • I ran hatch run prepare

Checklist

  • I have read the CONTRIBUTING document
  • I have added any necessary tests that prove my fix is effective or my feature works
  • I have updated the documentation accordingly
  • I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@fenil210-cactus

Please merge this on priority! This is a long standing issue
@mkmeral @pgrayy @afarntrog @zastrowm @dbschmigelski

Comment on lines 669 to 672
if hasattr(response, "metrics") and hasattr(response.metrics, "accumulated_usage"):
    accumulated_usage = response.metrics.accumulated_usage
    attributes.update(
        {
            "gen_ai.usage.prompt_tokens": accumulated_usage["inputTokens"],
            "gen_ai.usage.completion_tokens": accumulated_usage["outputTokens"],
            "gen_ai.usage.input_tokens": accumulated_usage["inputTokens"],
            "gen_ai.usage.output_tokens": accumulated_usage["outputTokens"],
            "gen_ai.usage.total_tokens": accumulated_usage["totalTokens"],
            "gen_ai.usage.cache_read_input_tokens": accumulated_usage.get("cacheReadInputTokens", 0),
            "gen_ai.usage.cache_write_input_tokens": accumulated_usage.get("cacheWriteInputTokens", 0),
        }
    )
# Attributes removed to prevent double counting in OpenTelemetry backends
# Usage metrics are already reported on the child model invocation spans
pass
Member

Have to take a closer look but if we are removing this then we should just remove the whole if condition. And no need for inline comments. This info is more for the PR I would say.


@fenil210-cactus fenil210-cactus Dec 12, 2025


Yes, we can get rid of the `if` and remove the inline comment too!

@rajib76 please check dude.

Contributor

@poshinchen poshinchen Dec 12, 2025


Users will not be able to view the accumulated tokens at the span/trace level after removing these attributes. Users would have to aggregate the usage themselves, which I personally don't want to happen. I think there could be a better way to prevent double counting.

Will get back after some investigation.
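For illustration only, one possible direction (a hypothetical sketch, not code from this PR): keep the aggregate visible on the parent span, but under attribute names outside the `gen_ai.usage.*` namespace, so backends like Langfuse that sum `gen_ai.usage.*` across spans don't count it twice. The `strands.accumulated_usage.*` names and the shape of the `accumulated_usage` dict here are assumptions:

```python
# Hypothetical sketch: expose accumulated totals on the parent span without
# reusing gen_ai.usage.* keys, which cost aggregators sum per span.
# The "strands.accumulated_usage.*" attribute names are an assumption,
# not part of this PR or of any OTel semantic convention.
def parent_span_usage_attributes(accumulated_usage: dict) -> dict:
    """Map accumulated usage onto non-standard attribute names for the parent span."""
    return {
        "strands.accumulated_usage.input_tokens": accumulated_usage["inputTokens"],
        "strands.accumulated_usage.output_tokens": accumulated_usage["outputTokens"],
        "strands.accumulated_usage.total_tokens": accumulated_usage["totalTokens"],
    }

usage = {"inputTokens": 120, "outputTokens": 80, "totalTokens": 200}
attrs = parent_span_usage_attributes(usage)
print(attrs["strands.accumulated_usage.total_tokens"])  # 200
```

This would preserve trace-level visibility while leaving the `gen_ai.usage.*` attributes only on the child model-invocation spans, where the backends already pick them up.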


@fenil210-cactus fenil210-cactus Dec 13, 2025


Yes, the cost is actually getting calculated twice and the same data is flowing to the Langfuse dashboard as well.

PFA screenshot. You can see how 0.048 is being calculated twice and then shown as 0.096 as the final value. It effectively doubles the cost for every operation. This is a serious concern. People might assume the issue is in their own flow, but in reality the problem is in how the data is being sent to Langfuse via OTEL and then getting calculated twice.

[screenshot: Langfuse trace showing the 0.048 cost counted twice and reported as 0.096]
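To make the double counting concrete, here is a minimal sketch (illustrative only, not code from this PR; the span names and token values are made up) of what happens when a backend sums `gen_ai.usage.*` across every span in a trace while the parent span duplicates the child's usage:

```python
# Illustrative only: span names and values are invented to mirror the
# reported behavior (the same usage counted twice, doubling the total).
spans = [
    {"name": "invoke_agent (parent)", "gen_ai.usage.total_tokens": 200},   # duplicated aggregate
    {"name": "chat (child model call)", "gen_ai.usage.total_tokens": 200},  # actual usage
]

# A backend that sums usage over all spans in the trace double counts:
total = sum(span["gen_ai.usage.total_tokens"] for span in spans)
print(total)  # 400, double the real 200
```

The same arithmetic applies to cost: 0.048 on the child plus the duplicated 0.048 on the parent yields the 0.096 shown in the dashboard.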

@kunalsawhney

This bug made me go crazy while trying to understand the usage. Please fix this asap.
