
fix: ITURHFProp service IS alive — was being killed by tight timeouts #824

Merged
accius merged 1 commit into main from Staging on Mar 24, 2026
Conversation

@accius (Owner) commented on Mar 24, 2026

The proppy-production service works and returns precise P.533-14 data, but the 10 s timeout was too tight for 24-hour × 10-band calculations that take 3–8 s. One slow response triggered a 2-minute backoff that blocked ALL users from getting precise predictions.
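
A per-request timeout of this kind is typically enforced by racing the upstream call against a timer. The sketch below is illustrative only — the constant names and the `withTimeout` helper are hypothetical, not the actual server.js code:

```javascript
// Hypothetical sketch of the widened request timeouts; names are
// illustrative, not taken from server.js.
const HOURLY_TIMEOUT_MS = 20_000;      // widened from 10 s
const SINGLE_HOUR_TIMEOUT_MS = 12_000; // widened from 8 s

// Race a promise against a timer; reject with 'timeout' if the
// upstream P.533 calculation takes too long.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```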

  • Hourly timeout: 10s → 20s (P.533 needs time)
  • Single-hour timeout: 8s → 12s
  • Backoff: only after 3 consecutive failures (was 1)
  • Backoff duration: 2 min → 30s
  • Reset fail counter on any successful response
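
The backoff changes above amount to a consecutive-failure counter with a threshold. A minimal sketch, assuming hypothetical names (`recordResult`, `inBackoff`, etc.) rather than the actual server.js implementation:

```javascript
// Illustrative sketch of the failure-threshold backoff described above;
// the names are hypothetical, not the real server.js code.
const FAIL_THRESHOLD = 3;   // back off only after 3 consecutive failures
const BACKOFF_MS = 30_000;  // 30 s, down from 2 min
let consecutiveFails = 0;
let backoffUntil = 0;

function recordResult(ok) {
  if (ok) {
    consecutiveFails = 0;   // any successful response resets the counter
    return;
  }
  consecutiveFails += 1;
  if (consecutiveFails >= FAIL_THRESHOLD) {
    backoffUntil = Date.now() + BACKOFF_MS;
  }
}

function inBackoff() {
  return Date.now() < backoffUntil;
}
```

With this shape, a single slow response no longer blocks anyone: it takes three failures in a row before requests are deferred, and even then only for 30 seconds.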

The service is healthy — it just needs patience. With the LRU cache (200 entries, 30-min TTL) + pre-warming from DX spots, most user clicks will hit cache anyway. The longer timeout only matters for the first request to a new path.
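
A size-capped LRU cache with a TTL, like the one described (200 entries, 30-minute TTL), can be sketched with a plain `Map`, which iterates in insertion order. This is a minimal illustration, not the actual server.js cache:

```javascript
// Minimal LRU-with-TTL sketch matching the described limits;
// illustrative only, not the real server.js implementation.
const MAX_ENTRIES = 200;
const TTL_MS = 30 * 60 * 1000; // 30-minute TTL

const cache = new Map(); // Map keeps insertion order: oldest key is first

function cacheGet(key) {
  const entry = cache.get(key);
  if (!entry) return undefined;
  if (Date.now() - entry.at > TTL_MS) { // expired entry
    cache.delete(key);
    return undefined;
  }
  cache.delete(key);  // re-insert to mark as most recently used
  cache.set(key, entry);
  return entry.value;
}

function cacheSet(key, value) {
  cache.delete(key);
  cache.set(key, { value, at: Date.now() });
  if (cache.size > MAX_ENTRIES) {
    cache.delete(cache.keys().next().value); // evict least recently used
  }
}
```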

What does this PR do?

Type of change

  • [x] Bug fix
  • [ ] New feature
  • [ ] Performance improvement
  • [ ] Refactor / code cleanup
  • [ ] Documentation
  • [ ] Translation
  • [ ] Map layer plugin

How to test

Checklist

  • App loads without console errors
  • Tested in Dark, Light, and Retro themes
  • Responsive at different screen sizes (desktop + mobile)
  • If touching server.js: caches have TTLs and size caps (we serve 2,000+ concurrent users)
  • If adding an API route: includes caching and error handling
  • If adding a panel: wired into Modern, Classic, and Dockable layouts
  • No hardcoded colors — uses CSS variables (var(--accent-cyan), etc.)
  • No .bak, .old, console.log debug lines, or test scripts included

Screenshots (if visual change)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
accius merged commit 71da431 into main on Mar 24, 2026
2 of 6 checks passed
