feat(cli/tui): add hourly usage report — CLI subcommand + TUI tab#359
crhan wants to merge 8 commits into junhoyeo:main
Conversation
…oggle

Support hour-granularity token consumption tracking:

- Core: `HourlyUsage`/`HourlyReport` structs, `get_hourly_report()` with local timezone support (derives the hour slot from `UnifiedMessage.timestamp`)
- CLI: `tokscale hourly` subcommand with full client/date filters, table and JSON output, and a Source column showing which tool was used
- TUI: dedicated Hourly tab (sort, scroll, striped rows, current-hour highlight) mirroring the Daily tab patterns
- Overview: press 'h' to toggle the bar chart between daily/hourly granularity

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Add "Cache%" column to the hourly table showing the cache hit multiplier
- Calculate cache efficiency as cache_read / (input + cache_write)
- Display "∞" for an infinite ratio (cache reads, zero paid input)
- Display "—" for no cache activity
- Update cache calculation logic in app state
- Enhance hourly and daily UI models with ratio formatting

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>
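A minimal sketch of the formatting rules this commit describes, using its formula cache_read / (input + cache_write); the function and parameter names are illustrative, not the actual tokscale code:

```rust
/// Illustrative sketch of the cache-ratio display rules from the commit
/// message: "—" for no cache activity, "∞" for cache reads with zero paid
/// input, otherwise the multiplier cache_read / (input + cache_write).
fn format_cache_ratio(cache_read: u64, input: u64, cache_write: u64) -> String {
    let paid = input + cache_write;
    if cache_read == 0 {
        "—".to_string() // no cache activity
    } else if paid == 0 {
        "∞".to_string() // cache reads, zero paid input
    } else {
        format!("{:.2}×", cache_read as f64 / paid as f64)
    }
}

fn main() {
    // A ratio above 1 means more tokens came from cache than were paid for.
    println!("{}", format_cache_ratio(200, 100, 0)); // prints "2.00×"
}
```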
Distinguish genuine human input from API message count (Msgs). Detect user→assistant boundaries in Claude Code JSONL, filtering out tool_result and system messages that also use type:"user". Turn/Msgs ratio reveals agent depth per interaction. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Propagate turn_count and message_count through TUI data structs, cache layer, and renderers (both narrow and full-width layouts). Also add turnCount/messageCount to daily JSON export. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…rly tab

The Hourly subcommand was missing the `--kilo` CLI arg (added upstream in junhoyeo#353) and omitted kilo from its `ClientFlags` initializer. Add both.

Also fix the kilocode help text in the Hourly command (was "Show only Kilo usage"; it should match the other commands: "Show only KiloCode usage").

Update TUI tab tests to reflect the new Hourly tab inserted between Daily and Stats: `test_tab_all` now expects 6 tabs, and the tab_next/tab_prev/backtab key-switch tests include the Hourly→Stats and Stats→Hourly transitions.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@crhan is attempting to deploy a commit to the Inevitable Team on Vercel. A member of the Team first needs to authorize it.
8 issues found across 14 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="crates/tokscale-core/src/sessions/claudecode.rs">
<violation number="1" location="crates/tokscale-core/src/sessions/claudecode.rs:318">
P2: `is_human_turn` treats any user content starting with `<` as a system message and relies on exact raw-string matching. Legitimate user prompts that begin with `<` (e.g., HTML snippets) will be misclassified, leaving `pending_turn_start` unset and undercounting turns.</violation>
</file>
<file name="crates/tokscale-cli/src/tui/app.rs">
<violation number="1" location="crates/tokscale-cli/src/tui/app.rs:511">
P2: Hourly default newest-first sort is only applied on reset paths, so starting directly on Hourly uses the wrong initial sort (Cost/Descending).</violation>
<violation number="2" location="crates/tokscale-cli/src/tui/app.rs:512">
P2: Hourly tab sort is unintentionally forced back to Date/Descending on every sort action, so Cost/Tokens sorting cannot persist.</violation>
</file>
<file name="crates/tokscale-core/src/lib.rs">
<violation number="1" location="crates/tokscale-core/src/lib.rs:1213">
P2: Hourly report returns `clients`/`models` in non-deterministic order because `HashSet` iteration is unsorted. This can lead to unstable JSON output and flaky comparisons; sort these vectors before returning.</violation>
</file>
<file name="crates/tokscale-cli/src/tui/ui/overview.rs">
<violation number="1" location="crates/tokscale-cli/src/tui/ui/overview.rs:76">
P2: render_chart now sorts and allocates the entire hourly dataset every render, even though only 60 points are displayed. With hourly history this can be much larger and the full sort becomes a hot-path render cost, risking UI lag.</violation>
</file>
<file name="crates/tokscale-cli/src/tui/cache.rs">
<violation number="1" location="crates/tokscale-cli/src/tui/cache.rs:57">
P2: Cache schema version is unchanged after adding new cached fields. Old version-2 cache files will deserialize with default empty hourly/count data and still be treated as `Fresh`, so the new hourly/count UI can show blank values until cache expiry. Consider bumping `CACHE_SCHEMA_VERSION` to force a refresh when schema changes.</violation>
</file>
<file name="crates/tokscale-core/src/sessions/mod.rs">
<violation number="1" location="crates/tokscale-core/src/sessions/mod.rs:39">
P2: Adding a non-optional `is_turn_start` field to `UnifiedMessage` without `#[serde(default)]` breaks deserialization of previously cached data (bincode cache stores `Vec<UnifiedMessage>`). Older cache files will fail to load and be discarded. Consider adding a default or bumping the cache schema.</violation>
</file>
<file name="crates/tokscale-cli/src/main.rs">
<violation number="1" location="crates/tokscale-cli/src/main.rs:790">
P2: Hourly subcommand bypasses the existing TUI/light dispatch logic, so `tokscale hourly` can never open the TUI in interactive mode and the `--light` flag is ignored (passed as `_light_or_json` but unused).</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```rust
// String content — check for XML-tagged system messages
if after_trimmed.len() > 1 {
    let content_start = &after_trimmed[1..];
    if content_start.starts_with('<') {
```
P2: is_human_turn treats any user content starting with < as a system message and relies on exact raw-string matching. Legitimate user prompts that begin with < (e.g., HTML snippets) will be misclassified, leaving pending_turn_start unset and undercounting turns.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-core/src/sessions/claudecode.rs, line 318:
<comment>`is_human_turn` treats any user content starting with `<` as a system message and relies on exact raw-string matching. Legitimate user prompts that begin with `<` (e.g., HTML snippets) will be misclassified, leaving `pending_turn_start` unset and undercounting turns.</comment>
<file context>
@@ -277,6 +297,34 @@ fn extract_claude_headless_message(
+ // String content — check for XML-tagged system messages
+ if after_trimmed.len() > 1 {
+ let content_start = &after_trimmed[1..];
+ if content_start.starts_with('<') {
+ return false;
+ }
</file context>
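One hypothetical direction for the fix: rather than rejecting any user content that starts with `<`, match only the specific system-injected tag prefixes. The tag list below is an assumption for illustration, not taken from the actual codebase:

```rust
/// Hypothetical sketch: classify a user message as a genuine human turn.
/// Only messages beginning with a known system-injected tag are filtered,
/// so user prompts that happen to start with '<' (e.g. HTML snippets)
/// still count. The tag names here are assumptions.
fn is_human_turn(content: &str) -> bool {
    const SYSTEM_TAG_PREFIXES: &[&str] = &[
        "<command-name>",
        "<local-command-stdout>",
        "<system-reminder>",
    ];
    let trimmed = content.trim_start();
    !SYSTEM_TAG_PREFIXES.iter().any(|tag| trimmed.starts_with(tag))
}

fn main() {
    // An HTML snippet typed by a real user still counts as a human turn.
    assert!(is_human_turn("<div>why doesn't this render?</div>"));
    assert!(!is_human_turn("<system-reminder>ctx</system-reminder>"));
}
```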
```rust
ChartGranularity::Hourly => {
    let hourly = &app.data.hourly;
    let mut sorted: Vec<_> = hourly.iter().collect();
    sorted.sort_by(|a, b| a.datetime.cmp(&b.datetime));
```
P2: render_chart now sorts and allocates the entire hourly dataset every render, even though only 60 points are displayed. With hourly history this can be much larger and the full sort becomes a hot-path render cost, risking UI lag.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-cli/src/tui/ui/overview.rs, line 76:
<comment>render_chart now sorts and allocates the entire hourly dataset every render, even though only 60 points are displayed. With hourly history this can be much larger and the full sort becomes a hot-path render cost, risking UI lag.</comment>
<file context>
@@ -40,36 +40,66 @@ pub fn render(frame: &mut Frame, app: &mut App, area: Rect) {
+ ChartGranularity::Hourly => {
+ let hourly = &app.data.hourly;
+ let mut sorted: Vec<_> = hourly.iter().collect();
+ sorted.sort_by(|a, b| a.datetime.cmp(&b.datetime));
- StackedBarData {
</file context>
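One way to keep the render path cheap, sketched with assumed types and field names: select the newest `window` entries with an O(n) partial partition, then sort only that small slice chronologically, so only the displayed window pays the sort cost.

```rust
/// Sketch (assumed struct shape): take the newest `window` hourly entries
/// via `select_nth_unstable_by`, then order just that slice for display.
/// Assumes `window >= 1`.
#[derive(Clone, Debug)]
struct HourlyUsage {
    datetime: String, // e.g. "2026-02-03 13:00"
    cost: f64,
}

fn last_n_chronological(hourly: &[HourlyUsage], window: usize) -> Vec<HourlyUsage> {
    let mut items = hourly.to_vec();
    if items.len() > window {
        let pivot = items.len() - window;
        // After this call, items[pivot..] holds the `window` newest entries.
        items.select_nth_unstable_by(pivot, |a, b| a.datetime.cmp(&b.datetime));
        items.drain(..pivot);
    }
    // O(window log window) instead of sorting the whole history every frame.
    items.sort_by(|a, b| a.datetime.cmp(&b.datetime));
    items
}

fn main() {
    let data = vec![
        HourlyUsage { datetime: "2026-02-03 12:00".into(), cost: 0.5 },
        HourlyUsage { datetime: "2026-02-03 10:00".into(), cost: 1.0 },
        HourlyUsage { datetime: "2026-02-03 11:00".into(), cost: 0.2 },
    ];
    let window = last_n_chronological(&data, 2);
    assert_eq!(window[0].datetime, "2026-02-03 11:00");
}
```

An alternative with the same effect is to sort the hourly vector once when the data is loaded and only slice the tail at render time.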
```rust
agents: Vec<CachedAgentUsage>,
daily: Vec<CachedDailyUsage>,
#[serde(default)]
hourly: Vec<CachedHourlyUsage>,
```
P2: Cache schema version is unchanged after adding new cached fields. Old version-2 cache files will deserialize with default empty hourly/count data and still be treated as Fresh, so the new hourly/count UI can show blank values until cache expiry. Consider bumping CACHE_SCHEMA_VERSION to force a refresh when schema changes.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-cli/src/tui/cache.rs, line 57:
<comment>Cache schema version is unchanged after adding new cached fields. Old version-2 cache files will deserialize with default empty hourly/count data and still be treated as `Fresh`, so the new hourly/count UI can show blank values until cache expiry. Consider bumping `CACHE_SCHEMA_VERSION` to force a refresh when schema changes.</comment>
<file context>
@@ -53,6 +53,8 @@ struct CachedUsageData {
agents: Vec<CachedAgentUsage>,
daily: Vec<CachedDailyUsage>,
+ #[serde(default)]
+ hourly: Vec<CachedHourlyUsage>,
graph: Option<CachedGraphData>,
total_tokens: u64,
</file context>
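A sketch of the version-gated loading the reviewer suggests; the constant and struct names are assumptions modeled on the comment, not the real tokscale cache module:

```rust
// Sketch: bump the schema constant whenever the cached layout changes, and
// treat any version mismatch as a cache miss so newly added fields are
// rebuilt from source data instead of silently defaulting to empty vectors.
const CACHE_SCHEMA_VERSION: u32 = 3; // hypothetically bumped from 2

struct CacheEnvelope {
    version: u32,
    payload: Vec<u8>,
}

fn load_if_current(envelope: &CacheEnvelope) -> Option<&[u8]> {
    // Older or newer versions both force a full rescan.
    (envelope.version == CACHE_SCHEMA_VERSION).then(|| envelope.payload.as_slice())
}

fn main() {
    let stale = CacheEnvelope { version: 2, payload: vec![1, 2, 3] };
    assert!(load_if_current(&stale).is_none());
}
```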
```rust
pub dedup_key: Option<String>,
/// True if this message is the first assistant response after a user turn.
/// Used to count user interaction turns (as opposed to API message count).
pub is_turn_start: bool,
```
P2: Adding a non-optional is_turn_start field to UnifiedMessage without #[serde(default)] breaks deserialization of previously cached data (bincode cache stores Vec<UnifiedMessage>). Older cache files will fail to load and be discarded. Consider adding a default or bumping the cache schema.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-core/src/sessions/mod.rs, line 39:
<comment>Adding a non-optional `is_turn_start` field to `UnifiedMessage` without `#[serde(default)]` breaks deserialization of previously cached data (bincode cache stores `Vec<UnifiedMessage>`). Older cache files will fail to load and be discarded. Consider adding a default or bumping the cache schema.</comment>
<file context>
@@ -34,6 +34,9 @@ pub struct UnifiedMessage {
pub dedup_key: Option<String>,
+ /// True if this message is the first assistant response after a user turn.
+ /// Used to count user interaction turns (as opposed to API message count).
+ pub is_turn_start: bool,
}
</file context>
Suggested change:

```diff
-    pub is_turn_start: bool,
+    #[serde(default)]
+    pub is_turn_start: bool,
```
```rust
});
let (since, until) = build_date_filter(today, week, month, since, until);
let year = normalize_year_filter(today, week, month, year);
run_hourly_report(
```
P2: Hourly subcommand bypasses the existing TUI/light dispatch logic, so tokscale hourly can never open the TUI in interactive mode and the --light flag is ignored (passed as _light_or_json but unused).
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-cli/src/main.rs, line 790:
<comment>Hourly subcommand bypasses the existing TUI/light dispatch logic, so `tokscale hourly` can never open the TUI in interactive mode and the `--light` flag is ignored (passed as `_light_or_json` but unused).</comment>
<file context>
@@ -684,6 +739,68 @@ fn main() -> Result<()> {
+ });
+ let (since, until) = build_date_filter(today, week, month, since, until);
+ let year = normalize_year_filter(today, week, month, year);
+ run_hourly_report(
+ json || light,
+ json,
</file context>
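The dispatch the reviewer describes can be sketched as a small decision function. The flag semantics below are an assumption based on the review comment, not the real main.rs logic:

```rust
// Sketch: route the Hourly subcommand through the same output-mode decision
// the other report commands use, so --light is honored and an interactive
// terminal gets the TUI. Mode names here are illustrative.
fn dispatch_hourly(json: bool, light: bool, interactive_tty: bool) -> &'static str {
    if json {
        "json-report" // machine-readable output always wins
    } else if light || !interactive_tty {
        "light-report" // explicit --light, or no TTY to draw a TUI in
    } else {
        "tui" // interactive terminal with no overriding flag
    }
}

fn main() {
    assert_eq!(dispatch_hourly(false, false, true), "tui");
}
```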
…stence on Hourly tab

Two bugs identified in code review:

1. lib.rs: `HourlyUsage.clients` and `.models` were populated via `HashSet` iteration, producing non-deterministic ordering in JSON output and display. Sort both vecs before returning.
2. tui/app.rs: `set_sort()` called `reset_selection()`, which forced `sort_field` back to Date/Descending whenever the user pressed c/t on the Hourly tab. Extract the per-tab sort default into `apply_tab_sort_defaults()` and call it only from tab-switch handlers (Tab, BackTab, Left, Right), not from `reset_selection`. This lets Cost and Tokens sorting persist while the user is on the Hourly tab.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Thanks for the thorough review. Addressed in 974b28c: Fixed:
Won't fix / false positives:
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="crates/tokscale-cli/src/tui/app.rs">
<violation number="1" location="crates/tokscale-cli/src/tui/app.rs:517">
P2: Mouse-based tab switching bypasses `apply_tab_sort_defaults`, so entering Hourly by click can preserve stale sort state instead of defaulting to newest-first.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```diff
@@ -8,7 +8,7 @@ use crossterm::event::{KeyCode, KeyEvent, KeyModifiers, MouseButton, MouseEvent,
 use ratatui::layout::Rect;
```
P2: Mouse-based tab switching bypasses apply_tab_sort_defaults, so entering Hourly by click can preserve stale sort state instead of defaulting to newest-first.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-cli/src/tui/app.rs, line 517:
<comment>Mouse-based tab switching bypasses `apply_tab_sort_defaults`, so entering Hourly by click can preserve stale sort state instead of defaulting to newest-first.</comment>
<file context>
@@ -506,8 +510,12 @@ impl App {
- // Hourly tab defaults to newest-first; other tabs keep cost sort
+ /// Apply per-tab sort defaults when switching tabs.
+ /// Must be called AFTER updating `self.current_tab`, before `reset_selection`.
+ fn apply_tab_sort_defaults(&mut self) {
+ // Hourly tab shows time-ordered data by default; other tabs keep cost sort.
if self.current_tab == Tab::Hourly {
</file context>
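A common way to close this gap is to funnel every tab change, keyboard and mouse alike, through one setter so the defaults hook cannot be bypassed. A minimal sketch with simplified stand-in types (not the actual App struct):

```rust
// Sketch: a single switch_tab entry point that both key handlers and mouse
// click handlers call, so apply_tab_sort_defaults always runs after the
// tab changes. Enum variants are trimmed for illustration.
#[derive(PartialEq, Clone, Copy)]
enum Tab { Daily, Hourly }

#[derive(PartialEq, Debug)]
enum SortField { Cost, Date }

struct App { current_tab: Tab, sort_field: SortField }

impl App {
    /// Called by Tab/BackTab/arrow key handlers AND mouse click handlers.
    fn switch_tab(&mut self, tab: Tab) {
        self.current_tab = tab;
        self.apply_tab_sort_defaults();
    }

    fn apply_tab_sort_defaults(&mut self) {
        // Hourly shows time-ordered data by default; other tabs keep cost sort.
        if self.current_tab == Tab::Hourly {
            self.sort_field = SortField::Date;
        }
    }
}

fn main() {
    let mut app = App { current_tab: Tab::Daily, sort_field: SortField::Cost };
    app.switch_tab(Tab::Hourly); // e.g. invoked from a mouse click handler
    assert_eq!(app.sort_field, SortField::Date);
}
```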
Display cost per million tokens alongside absolute cost in all models report table variants (model-only, client,model, client,provider,model). Helps compare model efficiency at a glance — higher token usage with lower Cost/1M means better cache utilization or a cheaper model. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="crates/tokscale-cli/src/main.rs">
<violation number="1" location="crates/tokscale-cli/src/main.rs:2561">
P2: `Cost/1M` formatting lacks non-finite float guards, so NaN/inf costs can be rendered as invalid output.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```rust
if total_tokens == 0 {
    "—".to_string()
} else {
    let cost_per_m = cost * 1_000_000.0 / total_tokens as f64;
    format!("${:.2}/M", cost_per_m)
}
```
P2: Cost/1M formatting lacks non-finite float guards, so NaN/inf costs can be rendered as invalid output.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At crates/tokscale-cli/src/main.rs, line 2561:
<comment>`Cost/1M` formatting lacks non-finite float guards, so NaN/inf costs can be rendered as invalid output.</comment>
<file context>
@@ -2506,6 +2557,15 @@ fn format_currency(n: f64) -> String {
}
+fn format_cost_per_million(cost: f64, total_tokens: i64) -> String {
+ if total_tokens == 0 {
+ "—".to_string()
+ } else {
</file context>
Suggested change:

```diff
-    if total_tokens == 0 {
-        "—".to_string()
-    } else {
-        let cost_per_m = cost * 1_000_000.0 / total_tokens as f64;
-        format!("${:.2}/M", cost_per_m)
-    }
+    if total_tokens <= 0 || !cost.is_finite() {
+        "—".to_string()
+    } else {
+        let cost_per_m = cost * 1_000_000.0 / total_tokens as f64;
+        if !cost_per_m.is_finite() {
+            "—".to_string()
+        } else {
+            format!("${:.2}/M", cost_per_m)
+        }
+    }
```
Summary
- `tokscale hourly` CLI subcommand with `--json`, `--light`, date filters, and all client flags (including the newly added `--kilo`)
- Overview bar-chart toggle (`h`) to switch between daily and hourly breakdown
- Renamed the cache column from `Cache%` to `Cache×` to clarify it's a multiplier, not a percentage

Details
The hourly report aggregates token usage, cost, turn count, and message count by hour. The Turn count uses a 10-second dedup window to collapse rapid consecutive user messages into a single logical turn (matching CC's session behavior).
Cache× is defined as `cache_read_tokens / input_tokens` — a value > 1 means more tokens were served from cache than paid for as fresh input, indicating good cache utilization.

Test notes
Three pre-existing scanner tests (`test_scan_all_clients_claude`, `test_scan_all_clients_multiple`, `test_scan_all_clients_headless_paths`) fail on upstream `main` as well — they scan the developer's real `~/.claude/projects` instead of the temp dir due to an unrelated env/path issue. Not introduced by this PR.

🤖 Generated with Claude Code