Revive garth: replace HTTP transport with Camoufox browser automation#225

Draft
nrvim wants to merge 2 commits into matin:main from nrvim:feat/camoufox-browser-transport

Conversation

@nrvim nrvim commented Mar 31, 2026

Summary

This PR revives garth by replacing the broken HTTP transport with Camoufox browser automation. Garmin made several changes that broke all Python access:

  • The SSO login endpoint (/mobile/api/login) was disabled and now returns 400.
  • The OAuth token endpoint (/oauth-service/oauth/preauthorized) was disabled and now returns 401.
  • Aggressive Cloudflare bot detection was deployed that blocks every Python HTTP client, including requests, httpx, and curl_cffi, even with Chrome TLS impersonation.

Extracting cookies from a browser and replaying them in Python also fails. There is no HTTP-level fix.

The only working method is page.evaluate(fetch(...)) inside a real browser. This PR implements that.

Why

garth was deprecated on March 28, 2026 because Garmin's changes made it unfixable at the HTTP level. But garth is not just a library. It's the foundation of an ecosystem.

python-garminconnect (1.7K stars) depends on garth for auth. The Home Assistant Garmin integration (452 stars) depends on python-garminconnect. GarminDB (3K stars), garminexport (564 stars), and garmin-connect-export (486 stars) are all broken. Dozens of MCP servers, fitness tools, and personal health projects are dead.

The maintainers of these projects didn't choose this breakage; Garmin forced it. This PR gives them a path back.

What changed

Source files (7 files)

| File | Change |
| --- | --- |
| sso.py | Rewritten. Browser-based SSO login via Camoufox. Handles MFA in-browser. Session cookies saved to ~/.garth/ with roughly a 1-year lifetime. |
| http.py | Rewritten. Client.request() routes through page.evaluate(fetch(...)). Binary download via FileReader/blob. Upload via FormData. |
| data/_base.py | Data.list() runs sequentially instead of using ThreadPoolExecutor, because Playwright cannot be called from multiple threads. |
| exc.py | GarthHTTPError.error changed from requests.HTTPError to base Exception. |
| telemetry.py | Removed requests.Session/Response dependency. Sanitization utilities preserved. |
| __init__.py | Removed deprecation warning. Added close export for browser cleanup. |
| pyproject.toml | Dependencies changed from requests and requests-oauthlib to camoufox and playwright. Status changed from Inactive to Beta. |

Test files (5 files)

| File | Change |
| --- | --- |
| conftest.py | Cassette replay via YAML loading instead of VCR HTTP interception. Same cassette data, different replay mechanism. |
| test_sso.py | Rewritten for browser login. Covers exchange, placeholder tokens, sentinel detection, and close cleanup. |
| test_http.py | Adapted. Token persistence tests preserved. Added browser token warning, proxy/ssl_verify warnings, multi-file upload rejection, and download via mock page. |
| test_telemetry.py | Simplified. Sanitization tests preserved. Removed requests-dependent session and response hook tests. |
| test_cli.py | Mocks sso.login() instead of VCR replay. |

Not changed

All data models in the data/ directory are untouched. All stats classes in the stats/ directory are untouched. auth_tokens.py, utils.py, cli.py, and the users/ module are all untouched. All cassette YAML files are preserved as test data.

How it works

User code (garth.connectapi, garth.login, etc.)
    │
    ▼
garth.http.Client.request()
    │
    ▼
page.evaluate(fetch(...))  ← runs inside Camoufox browser
    │
    ▼
Garmin Connect API  ← sees a real browser, Cloudflare allows it

First, garth.login(email, password) launches a headless Camoufox browser, navigates to Garmin SSO, fills the login form, handles MFA if needed, and saves session cookies.

After that, Client.request() routes all API calls through page.evaluate(fetch(...)), which is JavaScript fetch() running inside the browser context.
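As an illustration of that routing, here is a minimal, hypothetical sketch of how the browser-side fetch call might be assembled. The names FETCH_JS and build_fetch_args are illustrative, not garth's actual internals:

```python
from urllib.parse import urlencode

# Hypothetical sketch: the JS snippet evaluated inside the Camoufox page.
FETCH_JS = """
async ([url, method, jsonBody, csrf]) => {
    const h = {'connect-csrf-token': csrf || '', 'Accept': 'application/json'};
    let body;
    if (jsonBody) {
        h['Content-Type'] = 'application/json';
        body = JSON.stringify(jsonBody);
    }
    const resp = await fetch(url, {method, credentials: 'include', headers: h, body});
    return {status: resp.status, text: await resp.text(), url: resp.url};
}
"""

def build_fetch_args(path, method="GET", params=None, json_body=None, csrf=""):
    """Build the [url, method, body, csrf] list passed to page.evaluate(FETCH_JS, args)."""
    url = f"/gc-api/{path.lstrip('/')}"
    if params:
        url += "?" + urlencode(params)
    return [url, method.upper(), json_body, csrf]

# Inside the transport this would be used roughly as:
#   result = page.evaluate(FETCH_JS, build_fetch_args(path, params=params))
```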

Session cookies persist to ~/.garth/browser_session.json for roughly 1 year. Subsequent runs restore cookies so no login or MFA is needed.
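The cookie persistence step can be sketched as follows. save_session and load_session are hypothetical helper names, and the 364-day cutoff mirrors the roughly 1-year lifetime described above:

```python
import json
import time
from pathlib import Path

SESSION_MAX_AGE_DAYS = 364  # mirrors the "roughly 1 year" cookie lifetime

def save_session(path: Path, cookies: list) -> None:
    """Persist Playwright-style cookie dicts alongside a timestamp."""
    path.write_text(json.dumps({"saved_at": time.time(), "cookies": cookies}))

def load_session(path: Path):
    """Return saved cookies if the file exists and is fresh, else None."""
    if not path.exists():
        return None
    session = json.loads(path.read_text())
    age_days = (time.time() - session.get("saved_at", 0)) / 86400
    if age_days < SESSION_MAX_AGE_DAYS and session.get("cookies"):
        return session["cookies"]
    return None
```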

What I tested before submitting

I tested every approach before landing on browser automation:

| Approach | Result |
| --- | --- |
| requests with cookies | 403 Forbidden |
| curl_cffi with Chrome TLS impersonation | 403 Forbidden |
| httpx with extracted browser cookies | 403 Forbidden |
| garth's SSO login (/mobile/api/login) | 400 Bad Request, endpoint disabled |
| OAuth1 preauthorized endpoint | 401 Unauthorized, endpoint disabled |
| Browser cookies replayed in Python | 403 Forbidden |
| page.evaluate(fetch(...)) in Camoufox | 200 OK |

Test results

137 unit tests pass (down from 163): tests for dead HTTP code were removed, while all data model tests were kept.

72 out of 72 live integration tests passing against the real Garmin API, covering 24 connectapi endpoints, 14 data model .get() calls, 13 data model .list() calls, 12 stats class .list() calls, binary FIT file download, and token persistence through dump, load, dumps, and loads.

Known limitations

| Limitation | Details |
| --- | --- |
| Not thread-safe | Playwright uses greenlets bound to a single thread; page.evaluate() cannot be called from multiple threads. Data.list() now runs sequentially instead of using ThreadPoolExecutor. |
| Single session | Module-level browser state means one active Garmin session at a time. This matches garth's existing singleton pattern with garth.client. Multi-account use requires separate processes. |
| Memory | Camoufox uses roughly 200 to 300 MB of RAM for the headless browser. |
| Disk | Camoufox and Playwright install roughly 500 MB of browser binaries. |
| Startup | First login takes 10 to 15 seconds for browser launch and SSO. Subsequent runs with cached cookies take 3 to 5 seconds. |
| Placeholder tokens | login() returns placeholder OAuth tokens with the sentinel value "browser_session". The browser session handles real auth via cookies, so code that inspects token contents will see fake values. load() and loads() detect these and warn users to call login(). |
| return_on_mfa not supported | Raises NotImplementedError. MFA is handled interactively in the browser via the prompt_mfa callback. |
| proxies and ssl_verify not supported | Accepted for API compatibility but emit warnings. The browser transport handles networking differently. |
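The sequential Data.list() behavior can be sketched like this. This is a simplified stand-in assuming a per-day fetch callable, not the real method signature:

```python
from datetime import date, timedelta

def list_sequential(fetch_one, end: date, days: int) -> list:
    """Fetch one record per day, oldest first, on a single thread.

    Playwright pages are bound to one thread, so the former
    ThreadPoolExecutor fan-out is replaced by a plain loop.
    """
    results = []
    for offset in range(days):
        day = end - timedelta(days=days - 1 - offset)
        record = fetch_one(day)
        if record is not None:  # skip days with no data
            results.append(record)
    return results
```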

Intent

This is not a small patch. It's a fundamental transport change forced by Garmin's decisions to disable auth endpoints and deploy aggressive bot detection. The goal is to:

  1. Unblock the ecosystem so every downstream project can work again by updating garth
  2. Preserve the API surface so garth.login(), garth.connectapi(), garth.download(), data models, and stats classes all work exactly as before
  3. Preserve test data so all cassette YAML files with real API responses are kept as test fixtures
  4. Be honest about tradeoffs because browser transport is heavier than HTTP, single-threaded, and requires roughly 500 MB of browser binaries, but it works, and the HTTP path does not

I'm not asking garth to endorse browser automation as an ideal architecture. I'm saying the community needs garth to work, and this is the only way to make it work today, unless someone in the community finds a different approach.

Summary by CodeRabbit

  • New Features

    • Browser-based authentication and interactive MFA with a new close() for session cleanup
  • Improvements

    • Project status updated to Beta
    • HTTP operations now routed through a browser-driven transport
    • Sequential request execution (concurrency defaults to single-worker)
    • Telemetry no longer hooks HTTP sessions; proxy/ssl options now warn
  • Chores

    • Updated runtime/testing dependencies (added camoufox, playwright, pyyaml; removed requests-related packages)

Copilot AI review requested due to automatic review settings March 31, 2026 11:59
coderabbitai bot commented Mar 31, 2026

Walkthrough

Replaces requests-based HTTP/SSO with a browser-driven Camoufox/Playwright transport, removes concurrent Data.list fetching (MAX_WORKERS→1), strips telemetry session hooks, updates dependencies, and migrates tests from VCR to a cassette replay approach.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Project metadata: pyproject.toml | Replaced runtime deps requests, requests-oauthlib, logfire with camoufox>=0.4, playwright>=1.40; test dep pytest-vcr replaced with pyyaml>=6.0; classifier set to Beta. |
| SSO & module exports: src/garth/sso.py, src/garth/__init__.py | Replaced HTTP SSO with a browser-driven login flow; added browser state/singleton, session persistence, and browser-token sentinel helpers; login/exchange are now browser-aware; handle_mfa/resume_login disabled; exported close. |
| HTTP transport: src/garth/http.py | Switched Client to evaluate in-browser fetch via sso.get_page(). Added a _Response wrapper, adjusted Client signatures/returns, implemented browser upload/download (base64/FormData), warns on unsupported proxies/ssl, detects browser-backed tokens. |
| Telemetry: src/garth/telemetry.py | Removed session response-hooking and attach() logic; no longer installs HTTP hooks; model_post_init is a no-op; retained callback/logfire configuration paths without session wiring. |
| Data fetching: src/garth/data/_base.py | Removed ThreadPoolExecutor; Data.list now iterates sequentially; MAX_WORKERS changed from 10 to 1 and default max_workers hard-set to 1 (parameter accepted but ignored). |
| Exceptions: src/garth/exc.py | Removed import of requests.HTTPError; GarthHTTPError.error type generalized to Exception. |
| Tests & fixtures: tests/conftest.py, tests/* | Removed VCR integration; added a cassette replay loader (cassettes/<test_name>.yaml) and request-replay monkeypatching; updated tests to assert browser-token warnings, _Response behavior, and browser upload/download; simplified CLI/SSO/telemetry unit tests. Many VCR/integration tests removed or rewritten. |
| Exports: src/garth/__init__.py | Added close to __all__. |
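The cassette replay approach described in the tests row can be sketched as a matcher over parsed YAML interactions. pop_matching is a hypothetical helper, not the PR's exact conftest code:

```python
def pop_matching(interactions: list, method: str, path: str) -> dict:
    """Find and remove the first recorded interaction matching (method, path).

    Raises with the remaining URIs instead of silently returning an
    unrelated interaction, so mismatches surface as test failures.
    """
    for i, inter in enumerate(interactions):
        req = inter["request"]
        if req.get("method", "GET").upper() == method.upper() and path in req.get("uri", ""):
            return interactions.pop(i)
    remaining = ", ".join(
        f"{r['request'].get('method', 'GET')} {r['request'].get('uri', '')}"
        for r in interactions
    )
    raise RuntimeError(
        f"No cassette interaction for {method.upper()} {path}; remaining: {remaining}"
    )
```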

Sequence Diagram(s)

sequenceDiagram
    participant Client as Client
    participant Browser as Browser (Camoufox/Playwright)
    participant Server as Garmin Server

    rect rgba(100, 150, 200, 0.5)
    Note over Client,Browser: Browser-based SSO/Login
    Client->>Browser: launch()/navigate()
    Browser->>Server: GET SSO page
    Server-->>Browser: login form + CSRF
    Browser->>Browser: fill/submit credentials (interactive MFA if needed)
    Browser->>Server: POST credentials / MFA
    Server-->>Browser: set session cookies
    Browser->>Browser: extract CSRF & create placeholder tokens
    Browser-->>Client: return browser-backed token placeholders
    end

    rect rgba(150, 200, 100, 0.5)
    Note over Client,Server: Authenticated Request via Browser fetch
    Client->>Browser: evaluate(fetch("/gc-api/..." + params), headers)
    Browser->>Server: HTTP request with cookies/CSRF
    Server-->>Browser: response (JSON/binary)
    Browser-->>Client: base64 content / JSON wrapped in _Response
    Client->>Client: decode/process response
    end

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested reviewers

  • matin
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 31.37%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title clearly and concisely summarizes the primary change: replacing the HTTP transport with browser-based Camoufox automation, the main architectural shift across the codebase. |




Copilot AI left a comment


Pull request overview

Revives garth by replacing the broken HTTP-based transport/authentication flow with Camoufox/Playwright browser automation, routing API calls through page.evaluate(fetch(...)) and updating persistence + tests accordingly.

Changes:

  • Rewrote SSO + HTTP transport to use a real browser session (cookies/CSRF), including binary download/upload support.
  • Adjusted data fetching to be sequential (no thread pool) due to Playwright thread constraints, and simplified telemetry to remove requests coupling.
  • Updated dependencies and rewired tests from pytest-vcr interception to YAML cassette replay.

Reviewed changes

Copilot reviewed 12 out of 13 changed files in this pull request and generated 11 comments.

Show a summary per file
File Description
uv.lock Locks new Camoufox/Playwright dependency graph and removes oauth/VCR-related packages.
pyproject.toml Swaps runtime deps to camoufox/playwright, updates status classifier, adjusts test deps.
src/garth/sso.py Implements browser-driven login, session cookie persistence, CSRF extraction, and lifecycle helpers.
src/garth/http.py Implements browser-based request pipeline via page.evaluate(fetch(...)), plus upload/download helpers and token persistence warnings.
src/garth/data/_base.py Removes ThreadPoolExecutor usage and fetches date ranges sequentially.
src/garth/telemetry.py Removes requests hook integration while keeping sanitization + optional logfire reporting.
src/garth/exc.py Generalizes GarthHTTPError.error type away from requests.HTTPError.
src/garth/__init__.py Removes deprecation warning and exports close for browser cleanup.
tests/conftest.py Replaces VCR interception with YAML cassette replay by monkeypatching Client.request() for @pytest.mark.vcr tests.
tests/test_sso.py Updates SSO tests for placeholder tokens/sentinels and browser lifecycle helpers.
tests/test_http.py Updates HTTP tests for browser session warnings, unsupported config warnings, upload/download behavior, and refresh persistence.
tests/test_telemetry.py Simplifies telemetry tests to focus on sanitization + basic configuration.
tests/test_cli.py Switches CLI login test to mocking sso.login() instead of VCR replay.


Comment on lines 176 to +186
def request(
self,
method: str,
subdomain: str,
path: str,
/,
api: bool = False,
referrer: str | bool = False,
headers: dict = {},
**kwargs,
) -> Response:
url = f"https://{subdomain}.{self.domain}"
url = urljoin(url, path)
if referrer is True and self.last_resp:
headers["referer"] = self.last_resp.url
if api:
assert self.oauth1_token, (
"OAuth1 token is required for API requests"
) -> _Response:

Copilot AI Mar 31, 2026


Client.request() uses a mutable default argument (headers: dict = {}). This can leak headers between calls if the dict is mutated. Use headers: dict | None = None and create a new dict inside the method.

Comment on lines 176 to +206
def request(
self,
method: str,
subdomain: str,
path: str,
/,
api: bool = False,
referrer: str | bool = False,
headers: dict = {},
**kwargs,
) -> Response:
url = f"https://{subdomain}.{self.domain}"
url = urljoin(url, path)
if referrer is True and self.last_resp:
headers["referer"] = self.last_resp.url
if api:
assert self.oauth1_token, (
"OAuth1 token is required for API requests"
) -> _Response:
"""Make an API request through the browser.

All requests are routed through ``page.evaluate(fetch(...))``
in the Camoufox browser, bypassing Cloudflare.
"""
page = sso.get_page()
csrf = sso.get_csrf()

if not page:
raise GarthException(
msg="Not logged in — call login() first"
)
if (
not isinstance(self.oauth2_token, OAuth2Token)
or self.oauth2_token.expired
):
self.refresh_oauth2()
headers["Authorization"] = str(self.oauth2_token)
self.last_resp = self.sess.request(
method,

url = f"/gc-api/{path.lstrip('/')}"

params = kwargs.get("params")
if params:
from urllib.parse import urlencode
url += "?" + urlencode(params)


Copilot AI Mar 31, 2026


Client.request() currently ignores the subdomain, api, and referrer parameters but still exposes them in the public API. This is a behavioral change (e.g., client.get("connect", "/") will now fetch /gc-api/ and likely 404). Either implement subdomain-specific routing (at least for the previously supported subdomains) or explicitly reject/ warn when unsupported values are passed to avoid silent breakage.

Comment on lines +207 to +264
json_body = kwargs.get("json")
data = kwargs.get("data")

# Handle file uploads
files = kwargs.get("files")
if files:
if len(files) > 1:
raise GarthException(
msg="Only single file upload is supported"
)
return self._upload_via_browser(
page, csrf, url, files
)

result = page.evaluate("""
async ([url, method, jsonBody, formData, csrf,
extraHeaders]) => {
const h = {
'connect-csrf-token': csrf || '',
'Accept': 'application/json'
};
if (extraHeaders) {
Object.assign(h, extraHeaders);
}
let body = undefined;
if (jsonBody) {
h['Content-Type'] = 'application/json';
body = JSON.stringify(jsonBody);
} else if (formData) {
h['Content-Type'] =
'application/x-www-form-urlencoded';
body = formData;
}
try {
const resp = await fetch(url, {
method: method || 'GET',
credentials: 'include',
headers: h,
body: body
});
const text = await resp.text();
return {
status: resp.status,
text: text,
url: resp.url
};
} catch(e) {
return {status: 0, error: e.message};
}
}
""", [
url,
headers=headers,
timeout=self.timeout,
**kwargs,
)
try:
self.last_resp.raise_for_status()
except HTTPError as e:
method.upper(),
json_body,
data if isinstance(data, str) else None,
csrf,
headers if headers else None,
])

Copilot AI Mar 31, 2026


Client.request() only forwards data when it is a pre-encoded str (otherwise it sends null). Previously callers could pass a dict/bytes and have it encoded by the HTTP client. To preserve compatibility, consider accepting dict[str, str] (URL-encode it) and bytes (base64 or ArrayBuffer) in addition to str.
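One way to preserve that compatibility, sketched with a hypothetical encode_body helper (not part of the PR):

```python
import base64
from urllib.parse import urlencode

def encode_body(data):
    """Normalize the `data` kwarg for a browser-side fetch.

    Returns (text_body, b64_body): str passes through, dicts are
    URL-encoded, and bytes are base64-encoded so the page script can
    rebuild them into a Blob.
    """
    if data is None or isinstance(data, str):
        return data, None
    if isinstance(data, dict):
        return urlencode(data), None
    if isinstance(data, (bytes, bytearray)):
        return None, base64.b64encode(bytes(data)).decode()
    raise TypeError(f"Unsupported data type: {type(data).__name__}")
```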

Comment on lines +284 to +290
def _upload_via_browser(
self, page, csrf: str, url: str, files: dict
) -> _Response:
"""Upload a single file via browser FormData."""
field_name, (filename, fp) = next(iter(files.items()))
file_bytes = fp.read()
b64_data = base64.b64encode(file_bytes).decode()

Copilot AI Mar 31, 2026


_upload_via_browser() declares csrf: str, but callers pass through sso.get_csrf() which can be None (before _try_setup() succeeds). Update the type to str | None (or coerce to "" at the call site) to reflect actual values and avoid type-checker/runtime issues if code starts relying on string methods.

Comment on lines +284 to +301
def _upload_via_browser(
self, page, csrf: str, url: str, files: dict
) -> _Response:
"""Upload a single file via browser FormData."""
field_name, (filename, fp) = next(iter(files.items()))
file_bytes = fp.read()
b64_data = base64.b64encode(file_bytes).decode()

result = page.evaluate("""
async ([url, b64Data, fileName, csrf]) => {
const resp = await fetch(
'data:application/octet-stream;base64,'
+ b64Data
);
const blob = await resp.blob();
const formData = new FormData();
formData.append('file', blob, fileName);
try {

Copilot AI Mar 31, 2026


In _upload_via_browser(), field_name is unpacked but never used, which is misleading (it looks like the caller-specified field name is respected, but the JS always appends under 'file'). Either use field_name when building the FormData, or explicitly ignore it (e.g., _field_name) and document that the field name is fixed.

src/garth/sso.py Outdated
Comment on lines +124 to +138
# Navigate to Garmin Connect
try:
_page.goto(
"https://connect.garmin.com/modern/",
wait_until="domcontentloaded",
timeout=30000,
)
except Exception as e:
log.debug("Initial navigation error (may be expected): %s", e)
time.sleep(3)

# Check if session is valid
url = _page.url
needs_login = "sso.garmin.com" in url or not _try_setup()


Copilot AI Mar 31, 2026


Browser navigation is hardcoded to https://connect.garmin.com/modern/. This will break client.domain == "garmin.cn" (and any future domains), even though _do_login() attempts to support garmin.cn. Use client.domain to derive the correct Connect host for the initial navigation and session validation (and keep it consistent with _do_login()).
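A minimal sketch of the suggested fix, deriving the Connect base URL from the configured domain (connect_base is an illustrative helper name):

```python
def connect_base(domain: str) -> str:
    """Derive the Connect host from the configured Garmin domain
    (e.g. 'garmin.com' or 'garmin.cn') instead of hardcoding .com."""
    return f"https://connect.{domain}/modern/"

# e.g. _page.goto(connect_base(client.domain), wait_until="domcontentloaded")
```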

src/garth/sso.py Outdated
Comment on lines +236 to +246
def _try_setup() -> bool:
"""Extract CSRF token and verify session is valid."""
global _csrf
current = _page.url
if "/modern/" not in current:
try:
_page.goto(
"https://connect.garmin.com/modern/",
wait_until="domcontentloaded",
)
except Exception as e:

Copilot AI Mar 31, 2026


_try_setup() navigates to https://connect.garmin.com/modern/ when the current URL isn't under /modern/. Like the initial navigation in login(), this hardcodes the Connect host and will break non-.com domains (e.g., garmin.cn). Consider storing the resolved Connect base URL when launching the browser and reusing it here.

src/garth/sso.py Outdated
Comment on lines +105 to +122
_browser = _cm.__enter__()
_page = _browser.new_page()
context = _page.context

# Load saved session cookies
if session_file.exists():
try:
session = json.loads(session_file.read_text())
age_days = (
time.time() - session.get("saved_at", 0)
) / 86400
if age_days < 364 and session.get("cookies"):
context.add_cookies(session["cookies"])
log.info(
"Loaded session cookies (%.0f days old)", age_days
)
except Exception as e:
log.debug("Could not load session cookies: %s", e)

Copilot AI Mar 31, 2026


login() manually calls _cm.__enter__() but doesn't guarantee cleanup if an exception is raised mid-login (e.g., navigation timeout, selector change). Wrapping the login flow in try/except with close() in a finally (or using with Camoufox(...) as browser:) would prevent leaking a running browser process on failures.

Suggested change
_browser = _cm.__enter__()
_page = _browser.new_page()
context = _page.context
# Load saved session cookies
if session_file.exists():
try:
session = json.loads(session_file.read_text())
age_days = (
time.time() - session.get("saved_at", 0)
) / 86400
if age_days < 364 and session.get("cookies"):
context.add_cookies(session["cookies"])
log.info(
"Loaded session cookies (%.0f days old)", age_days
)
except Exception as e:
log.debug("Could not load session cookies: %s", e)
try:
_browser = _cm.__enter__()
_page = _browser.new_page()
context = _page.context
# Load saved session cookies
if session_file.exists():
try:
session = json.loads(session_file.read_text())
age_days = (
time.time() - session.get("saved_at", 0)
) / 86400
if age_days < 364 and session.get("cookies"):
context.add_cookies(session["cookies"])
log.info(
"Loaded session cookies (%.0f days old)", age_days
)
except Exception as e:
log.debug("Could not load session cookies: %s", e)
except Exception:
# Ensure that any partially-initialized browser is cleaned up
close()
raise

Comment on lines +113 to +115
headers={},
**kwargs,
):

Copilot AI Mar 31, 2026


mock_request() uses mutable default arguments (headers={}), which can leak state between calls if mutated. Use headers=None and normalize inside the function.

Suggested change
headers={},
**kwargs,
):
headers=None,
**kwargs,
):
headers = {} if headers is None else headers

Comment on lines +135 to +141
if best_idx is None and remaining:
best_idx = 0

if best_idx is None:
raise RuntimeError(
"No cassette interactions remaining"
)

Copilot AI Mar 31, 2026


Cassette replay fallback if best_idx is None and remaining: best_idx = 0 will silently return an unrelated interaction when no match is found, which can hide real regressions and make tests non-deterministic. It would be safer to raise with a helpful error message (e.g., expected method/path and a list of remaining cassette URIs) when no match is found.

Suggested change
if best_idx is None and remaining:
best_idx = 0
if best_idx is None:
raise RuntimeError(
"No cassette interactions remaining"
)
if best_idx is None:
expected = f"{method.upper()} {match_path}"
if remaining:
remaining_desc = ", ".join(
f"{i}: {r['request'].get('method', 'GET')} {r['request'].get('uri', '')}"
for i, r in enumerate(remaining)
)
msg = (
"No matching cassette interaction found for "
f"{expected}. Remaining interactions: {remaining_desc}"
)
else:
msg = (
"No cassette interactions remaining; expected "
f"{expected}"
)
raise RuntimeError(msg)


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (7)
tests/test_sso.py (1)

81-90: Consider test isolation for global state tests.

test_close_is_safe_before_login and test_close_clears_all_state both call sso.close() which mutates module-level globals (_browser, _cm, _page, _csrf). If tests run in parallel or in different orders, this could cause flakiness. Consider adding a fixture that ensures clean state before/after these tests.

♻️ Add a fixture to ensure clean state
@pytest.fixture(autouse=True)
def _clean_sso_state():
    """Ensure SSO module state is clean before and after each test."""
    sso.close()
    yield
    sso.close()
src/garth/telemetry.py (1)

124-125: Consider removing the empty model_post_init or adding a comment.

With _attached_sessions removed, model_post_init now does nothing. Either remove it entirely or add a brief comment explaining why it's kept (e.g., for future extension).

♻️ Option 1: Remove empty method
-    def model_post_init(self, __context):
-        pass
tests/test_http.py (2)

80-106: Import statements inside test functions.

The logging import at lines 99 and 124 is placed inside the test functions rather than at the module top. While functional, this is unconventional. Consider moving it to the top-level imports for consistency.

Suggested fix

Add to top of file:

import logging

Then remove the inline imports at lines 99 and 124.


275-303: Consider using a NamedTuple for fixture return value.

The fixture yields a 5-element tuple that is unpacked in tests. Using a NamedTuple or dataclass would improve readability and reduce the risk of unpacking errors.

Example improvement
from typing import NamedTuple

class GarthHomeFixture(NamedTuple):
    client: Client
    tempdir: str
    mock_oauth1: OAuth1Token
    mock_oauth2: OAuth2Token
    sso_mod: Any

@pytest.fixture
def garth_home_client(monkeypatch: pytest.MonkeyPatch):
    # ... setup code ...
    yield GarthHomeFixture(client, tempdir, mock_oauth1, mock_oauth2, garth.sso)
src/garth/sso.py (1)

19-26: Module-level singleton state is documented but risky.

The module-level globals (_cm, _browser, _page, _csrf) create a singleton pattern that the docstrings correctly warn is not thread-safe. However, there's no locking mechanism to prevent race conditions if multiple threads accidentally call login() or close() simultaneously.

Consider adding a warning log if a concurrent call is detected, or document this limitation more prominently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/garth/sso.py` around lines 19 - 26, The module-level singletons (_cm,
_browser, _page, _csrf) are not thread-safe and calls to login() or close() can
race; add a simple concurrency guard: introduce a module-level threading.Lock
(or RLock) and acquire it at the start of public entry points like login() and
close(), releasing it on exit (ensure release in finally), and if the lock
cannot be acquired immediately log a warning including which function was called
to surface concurrent use; update docstring to briefly state that this lock
serializes access but the module is still single-session only.
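A minimal sketch of the suggested guard; the `_serialized` decorator and the placeholder `login()` body below are hypothetical illustrations, not the PR's actual code:

```python
import logging
import threading

logger = logging.getLogger(__name__)

# Module-level lock serializing public entry points; the module is
# still single-session only, the lock just prevents races.
_lock = threading.RLock()

def _serialized(fn):
    """Acquire the module lock around an entry point, warning on contention."""
    def wrapper(*args, **kwargs):
        # probe without blocking first so concurrent use is surfaced in logs
        if not _lock.acquire(blocking=False):
            logger.warning("concurrent call to %s detected; waiting", fn.__name__)
            _lock.acquire()
        try:
            return fn(*args, **kwargs)
        finally:
            _lock.release()
    return wrapper

@_serialized
def login() -> str:
    return "logged in"  # placeholder body for illustration
```

Using `RLock` rather than `Lock` keeps re-entrant calls from the same thread (e.g. `login()` internally calling another guarded helper) from deadlocking.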
src/garth/http.py (2)

184-185: Mutable default argument headers={}.

Using a mutable default argument is a well-known Python antipattern. While the function doesn't modify headers currently, this could cause subtle bugs if the function is later modified to append headers.

Suggested fix
     def request(
         self,
         method: str,
         subdomain: str,
         path: str,
         /,
         api: bool = False,
         referrer: str | bool = False,
-        headers: dict = {},
+        headers: dict | None = None,
         **kwargs,
     ) -> _Response:
         ...
+        headers = headers or {}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/garth/http.py` around lines 184 - 185, The function signature currently
uses a mutable default argument headers: dict = {}, which can cause subtle bugs;
change the parameter to headers: Optional[dict] = None (import Optional from
typing if needed) and inside the function add a guard like if headers is None:
headers = {} so the function returns a fresh dict each call; keep the rest of
the behavior and **kwargs unchanged and update any type annotations or tests
that rely on the old signature.
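For context, a minimal demonstration of why the shared default bites; `bad_request` and `good_request` are hypothetical stand-ins for the real `request()`:

```python
def bad_request(path, headers={}):            # ONE dict shared across every call
    headers.setdefault("x-first-path", path)  # mutation survives the call
    return dict(headers)

def good_request(path, headers=None):         # the suggested fix
    if headers is None:
        headers = {}                          # fresh dict per call
    headers.setdefault("x-first-path", path)
    return dict(headers)

first = bad_request("/a")
second = bad_request("/b")  # still reports "/a": the default dict leaked state
```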

288-288: Unused variable field_name.

The field_name is unpacked but not used. The JavaScript always uses 'file' as the FormData field name. If this is intentional (Garmin API requirement), prefix with underscore to suppress the warning. If the field name should be respected, pass it to the JavaScript.

Suggested fix (if intentional)
-        field_name, (filename, fp) = next(iter(files.items()))
+        _field_name, (filename, fp) = next(iter(files.items()))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/garth/http.py` at line 288, The unpacking of form-data in the files
handling (field_name, (filename, fp) = next(iter(files.items()))) produces an
unused variable field_name; either mark it as intentionally unused by renaming
to _field_name to suppress lint warnings, or propagate it to the client-side
FormData creation so the actual form field name is respected (i.e., replace the
hard-coded 'file' usage with the unpacked field_name when constructing the
FormData). Update the code in the files handling block (the files variable
extraction and the place where the JS/FormData field is set) accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/garth/sso.py`:
- Around line 382-386: The current fragile UI detection uses _page.evaluate
checking document.body?.innerText?.includes("Activities") via the has_app
variable; replace this text-based check with a robust DOM or URL-based
detection: in the code where has_app is assigned (the _page.evaluate call in
src/garth/sso.py), query for a stable element selector (e.g.,
document.querySelector for a known id/class/aria attribute used by the
Activities app) or check the page URL/pattern via _page.url or a navigation
response instead of matching visible text, and update the conditional that uses
has_app to use the new existence/URL check.
- Around line 101-107: The code manually calls _cm.__enter__() on the Camoufox
context manager (symbols: Camoufox, _cm, _browser, _page) without guaranteeing
_cm.__exit__() runs if an exception (e.g., in login()) occurs, which can leak
the browser process; fix by converting this to a true context manager usage
(with Camoufox(headless=headless) as _browser: ...) or wrap the enter/usage in
try/finally where finally calls _cm.__exit__() or _cm.close()/cleanup, ensuring
any exception during login() triggers the cleanup path and that _browser/_page
are closed reliably.
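The cleanup contract being asked for can be sketched with a stand-in context manager in place of Camoufox; `FakeBrowser` and `login_with_cleanup` are hypothetical names:

```python
class FakeBrowser:
    """Stand-in for the Camoufox context manager, for illustration only."""
    def __init__(self):
        self.closed = False
    def __enter__(self):
        return self
    def __exit__(self, *exc):
        self.closed = True
        return False  # never swallow the original exception

def login_with_cleanup(cm, do_login):
    browser = cm.__enter__()
    try:
        return do_login(browser)
    except BaseException:
        # any failure during login tears the browser down before re-raising;
        # on success the browser intentionally stays open for the session
        cm.__exit__(None, None, None)
        raise
```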

In `@tests/conftest.py`:
- Around line 105-115: The mock_request function uses a mutable default argument
headers={}, which can lead to shared-state bugs; change the signature of
mock_request to use headers=None and inside the function initialize it (e.g., if
headers is None: headers = {}) before use, ensuring any modifications are local
to the call and not shared across invocations; update any callers or tests if
they relied on the previous default behavior.
- Around line 135-141: The current fallback that sets best_idx = 0 when best_idx
is None and remaining is truthy should be removed because it can return
unrelated cassette data; in the block handling cassette matching (referencing
best_idx and remaining) replace that fallback with a RuntimeError that includes
the requested method/path (or request object) and a summary of remaining
interactions so tests fail loudly and show what was requested vs. what cassettes
are available; alternatively, if a permissive behavior is desired, gate the
fallback behind an explicit option (e.g., allow_unmatched=True) and document it,
but do not silently default to the first remaining interaction.

In `@tests/test_http.py`:
- Around line 357-365: The test compares oauth1_mtime_before and
oauth1_mtime_after after calling client.refresh_oauth2() which is flaky because
filesystem mtime resolution may be coarse; instead read and compare the file
contents (or compute a checksum) at oauth1_path before and after calling
client.refresh_oauth2() to assert no change, or relax the assertion to allow
equal OR unchanged content — locate the variables oauth1_path and
oauth1_mtime_before and the call client.refresh_oauth2() in the test and replace
the mtime-based assertion with a content-based comparison (e.g., read file into
a bytes/string or compute hash) to ensure the test is robust across filesystems.
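A content-based check along those lines might look like this sketch, using a throwaway temp file in place of the real oauth1 token path:

```python
import hashlib
import os
import tempfile

def file_digest(path: str) -> str:
    # a content hash is stable across filesystems, unlike st_mtime,
    # whose resolution can be as coarse as 1-2 seconds
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# throwaway stand-in for the oauth1 token file the test watches
tmp = tempfile.NamedTemporaryFile(suffix=".json", delete=False)
tmp.write(b'{"oauth_token": "abc"}')
tmp.close()

before = file_digest(tmp.name)
# ... client.refresh_oauth2() would run here; it must not rewrite oauth1 ...
after = file_digest(tmp.name)
os.unlink(tmp.name)
```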

---

Nitpick comments:
In `@src/garth/http.py`:
- Around line 184-185: The function signature currently uses a mutable default
argument headers: dict = {}, which can cause subtle bugs; change the parameter
to headers: Optional[dict] = None (import Optional from typing if needed) and
inside the function add a guard like if headers is None: headers = {} so the
function returns a fresh dict each call; keep the rest of the behavior and
**kwargs unchanged and update any type annotations or tests that rely on the old
signature.
- Line 288: The unpacking of form-data in the files handling (field_name,
(filename, fp) = next(iter(files.items()))) produces an unused variable
field_name; either mark it as intentionally unused by renaming to _field_name to
suppress lint warnings, or propagate it to the client-side FormData creation so
the actual form field name is respected (i.e., replace the hard-coded 'file'
usage with the unpacked field_name when constructing the FormData). Update the
code in the files handling block (the files variable extraction and the place
where the JS/FormData field is set) accordingly.

In `@src/garth/sso.py`:
- Around line 19-26: The module-level singletons (_cm, _browser, _page, _csrf)
are not thread-safe and calls to login() or close() can race; add a simple
concurrency guard: introduce a module-level threading.Lock (or RLock) and
acquire it at the start of public entry points like login() and close(),
releasing it on exit (ensure release in finally), and if the lock cannot be
acquired immediately log a warning including which function was called to
surface concurrent use; update docstring to briefly state that this lock
serializes access but the module is still single-session only.

In `@src/garth/telemetry.py`:
- Around line 124-125: The empty model_post_init method is now dead code (since
_attached_sessions was removed); either delete the model_post_init method from
the class in src/garth/telemetry.py or replace its body with a brief comment
explaining why it remains (e.g., reserved for future hooks or
backward-compatibility with subclasses overriding model_post_init), and ensure
any subclasses or references expecting model_post_init are updated accordingly.

In `@tests/test_http.py`:
- Around line 80-106: Move the inline "import logging" out of the test functions
and add a single top-level "import logging" to the module; update the tests that
reference logging (e.g., test_load_browser_tokens_warns which uses
caplog.at_level and any other tests with inline imports) to remove their
internal "import logging" statements so they rely on the module-level import
instead.
- Around line 275-303: Replace the 5-tuple yield in the pytest fixture
garth_home_client with a NamedTuple (or dataclass) to make unpacking explicit:
define a GarthHomeFixture NamedTuple with fields client, tempdir, mock_oauth1,
mock_oauth2, sso_mod and change the fixture to yield GarthHomeFixture(client,
tempdir, mock_oauth1, mock_oauth2, garth.sso); update tests that unpack the
fixture to access attributes (e.g., fixture.client) instead of tuple indices to
improve readability and avoid unpacking errors.

In `@tests/test_sso.py`:
- Around line 81-90: Tests mutate module-level SSO globals via sso.close(),
causing potential flakiness; add an autouse fixture that calls sso.close()
before and after each test to guarantee clean state. Implement a pytest fixture
(e.g., _clean_sso_state) that runs autouse=True and invokes sso.close() before
yield and again after yield so tests like test_close_is_safe_before_login and
test_close_clears_all_state run with _browser/_cm/_page/_csrf cleared regardless
of order or parallel execution.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a5ee5d9d-f97f-44eb-8f95-c04c62c74179

📥 Commits

Reviewing files that changed from the base of the PR and between f99159a and 26f9948.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (12)
  • pyproject.toml
  • src/garth/__init__.py
  • src/garth/data/_base.py
  • src/garth/exc.py
  • src/garth/http.py
  • src/garth/sso.py
  • src/garth/telemetry.py
  • tests/conftest.py
  • tests/test_cli.py
  • tests/test_http.py
  • tests/test_sso.py
  • tests/test_telemetry.py

…p, domain support, stricter cassette matching
@nrvim
Author

nrvim commented Mar 31, 2026

Addressed all review feedback in 4c26f62:

  • Fixed mutable default headers={} in request() and conftest.py (now None with guard)
  • Added try/finally around browser __enter__() so browser cleans up on login failure
  • Fixed hardcoded connect.garmin.com to use client.domain (supports garmin.cn) in login(), _try_setup(), and initial navigation
  • Removed **kwargs from configure() so unknown keys raise instead of being silently swallowed
  • Fixed _upload_via_browser csrf type to str | None, renamed unused field_name to _field_name
  • Handle data as dict (URL-encode) and bytes (decode) in request(), not just str
  • Replaced fragile ?.includes("Activities") text check with URL-based redirect detection
  • Stricter cassette matching in conftest: raises with details instead of falling back to unrelated interaction
  • Fixed flaky mtime test: compare file content instead of filesystem timestamps
  • Added doseq=True to urlencode for list params

Contributor

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (2)
src/garth/sso.py (1)

348-419: Consider adding a timeout parameter for the login polling loop.

The 120-iteration loop with 1-second sleeps (2-minute timeout) is hardcoded. For users with slow connections or complex MFA flows, this might be insufficient.

♻️ Optional: Make timeout configurable
 def _do_login(
     email: str,
     password: str,
     prompt_mfa: Callable | None,
     domain: str = "garmin.com",
+    login_timeout: int = 120,
 ) -> None:
     ...
-    for _ in range(120):
+    for _ in range(login_timeout):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/garth/sso.py` around lines 348 - 419, The login polling loop currently
uses hardcoded iteration counts (120 and 60) and 1-second sleeps, which should
be made configurable; add a timeout parameter (e.g., poll_timeout_seconds
defaulting to 120) to the surrounding function (the code using _page,
connect_domain, sso_domain, prompt_mfa) and replace the fixed for-range loops
with time-based checks using time.time() deadline logic, and optionally derive
the post-MFA wait (previously 60s) from the same timeout or a separate
post_mfa_timeout parameter so slow connections/MFA flows can extend the wait
without changing the code.
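A deadline-based variant of that loop could be sketched as follows; `poll_until` and its injectable `clock`/`sleep` hooks are hypothetical, shown in place of the PR's fixed `range(120)` loop:

```python
import time

def poll_until(check, timeout_s=120.0, interval_s=1.0,
               clock=time.monotonic, sleep=time.sleep):
    """Poll `check` until truthy or the deadline passes.

    `check` is a zero-arg callable returning truthy when the page has
    reached the post-auth state (a stand-in for the _page.url inspection).
    The clock/sleep hooks exist so tests can run without real waiting.
    """
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False
```

Deadline arithmetic with `time.monotonic()` keeps the total wait fixed even if individual `check()` calls are slow, which a fixed iteration count does not.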
src/garth/http.py (1)

70-81: Note: Telemetry HTTP hooks are now inactive.

With the switch to browser-based transport, the Telemetry class in telemetry.py can no longer hook into HTTP requests/responses. The callback mechanism and session tracking are effectively dead code. If telemetry for API calls is desired, it would need to be reimplemented within the _Response handling or the JavaScript fetch wrapper.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/garth/http.py` around lines 70 - 81, The Telemetry callback/session hooks
are now dead due to browser-based transport; update the code by removing or
deprecating the Telemetry class and its unused callbacks, or reimplement
telemetry inside the browser fetch path: either remove references to Telemetry
in Client (class Client in src/garth/http.py) and delete/mark
telemetry.Telemetry as unused, or move the hook points into the response wrapper
(class/structure named _Response) or into the JavaScript fetch wrapper that runs
in the headless page so telemetry is invoked on fetch completion; ensure any
session-tracking fields and callback registrars are cleaned up where they are
referenced to avoid stale dead code.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Nitpick comments:
In `@src/garth/http.py`:
- Around line 70-81: The Telemetry callback/session hooks are now dead due to
browser-based transport; update the code by removing or deprecating the
Telemetry class and its unused callbacks, or reimplement telemetry inside the
browser fetch path: either remove references to Telemetry in Client (class
Client in src/garth/http.py) and delete/mark telemetry.Telemetry as unused, or
move the hook points into the response wrapper (class/structure named _Response)
or into the JavaScript fetch wrapper that runs in the headless page so telemetry
is invoked on fetch completion; ensure any session-tracking fields and callback
registrars are cleaned up where they are referenced to avoid stale dead code.

In `@src/garth/sso.py`:
- Around line 348-419: The login polling loop currently uses hardcoded iteration
counts (120 and 60) and 1-second sleeps, which should be made configurable; add
a timeout parameter (e.g., poll_timeout_seconds defaulting to 120) to the
surrounding function (the code using _page, connect_domain, sso_domain,
prompt_mfa) and replace the fixed for-range loops with time-based checks using
time.time() deadline logic, and optionally derive the post-MFA wait (previously
60s) from the same timeout or a separate post_mfa_timeout parameter so slow
connections/MFA flows can extend the wait without changing the code.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 22e545b5-24ab-41f4-82a5-f5b3ec09d378

📥 Commits

Reviewing files that changed from the base of the PR and between 26f9948 and 4c26f62.

📒 Files selected for processing (4)
  • src/garth/http.py
  • src/garth/sso.py
  • tests/conftest.py
  • tests/test_http.py

@cyberjunky
Contributor

@nrvim Thanks! I got python-garminconnect working again using your code. I decided to integrate it without depending on garth, because it's unclear whether matin will pick it up again... I almost dropped it too. I'll do some more testing and release a new version.

@nrvim
Author

nrvim commented Apr 1, 2026

@nrvim Thanks! I got python-garminconnect working again using your code. I decided to integrate it without depending on garth, because it's unclear whether matin will pick it up again... I almost dropped it too. I'll do some more testing and release a new version.

Nice, but please take a look at #217: there are new findings that might offer an easier path than my PR. I think my PR is the last mile, for when everything else fails.

@arpanghosh8453

@nrvim Thanks! I got python-garminconnect working again using your code. I decided to integrate it without depending on garth, because it's unclear whether matin will pick it up again... I almost dropped it too. I'll do some more testing and release a new version.

@cyberjunky I think @matin is busy with his own life, as he decided, and he did so much for this project! Huge respect to him! I'm excited to hear that it's working for you again; just waiting for more good news!

@codecov

codecov bot commented Apr 1, 2026

Codecov Report

❌ Patch coverage is 69.32849% with 169 lines in your changes missing coverage. Please review.
✅ Project coverage is 94.07%. Comparing base (f99159a) to head (4c26f62).

Files with missing lines Patch % Lines
src/garth/sso.py 21.42% 132 Missing ⚠️
src/garth/http.py 71.27% 27 Missing ⚠️
tests/conftest.py 89.74% 8 Missing ⚠️
tests/test_cli.py 90.00% 1 Missing ⚠️
tests/test_http.py 99.19% 1 Missing ⚠️

❗ There is a different number of reports uploaded between BASE (f99159a) and HEAD (4c26f62).

HEAD has 15 uploads less than BASE
Flag BASE (f99159a) HEAD (4c26f62)
unittests 30 15
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #225      +/-   ##
==========================================
- Coverage   99.88%   94.07%   -5.81%     
==========================================
  Files          68       68              
  Lines        3571     3361     -210     
==========================================
- Hits         3567     3162     -405     
- Misses          4      199     +195     
Flag Coverage Δ
unittests 94.07% <69.32%> (-5.81%) ⬇️


☔ View full report in Codecov by Sentry.

@nrvim nrvim marked this pull request as draft April 1, 2026 19:46