Revive garth: replace HTTP transport with Camoufox browser automation #225
nrvim wants to merge 2 commits into matin:main
Conversation
Walkthrough

Replaces requests-based HTTP/SSO with a browser-driven Camoufox/Playwright transport, removes concurrent `Data.list` fetching (`MAX_WORKERS` → 1), strips telemetry session hooks, updates dependencies, and migrates tests from VCR to a cassette-replay approach.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant Browser as Browser (Camoufox/Playwright)
    participant Server as Garmin Server
    rect rgba(100, 150, 200, 0.5)
    Note over Client,Browser: Browser-based SSO/Login
    Client->>Browser: launch()/navigate()
    Browser->>Server: GET SSO page
    Server-->>Browser: login form + CSRF
    Browser->>Browser: fill/submit credentials (interactive MFA if needed)
    Browser->>Server: POST credentials / MFA
    Server-->>Browser: set session cookies
    Browser->>Browser: extract CSRF & create placeholder tokens
    Browser-->>Client: return browser-backed token placeholders
    end
    rect rgba(150, 200, 100, 0.5)
    Note over Client,Server: Authenticated Request via Browser fetch
    Client->>Browser: evaluate(fetch("/gc-api/..." + params), headers)
    Browser->>Server: HTTP request with cookies/CSRF
    Server-->>Browser: response (JSON/binary)
    Browser-->>Client: base64 content / JSON wrapped in _Response
    Client->>Client: decode/process response
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Pull request overview
Revives garth by replacing the broken HTTP-based transport/authentication flow with Camoufox/Playwright browser automation, routing API calls through `page.evaluate(fetch(...))` and updating persistence + tests accordingly.
Changes:
- Rewrote SSO + HTTP transport to use a real browser session (cookies/CSRF), including binary download/upload support.
- Adjusted data fetching to be sequential (no thread pool) due to Playwright thread constraints, and simplified telemetry to remove `requests` coupling.
- Updated dependencies and rewired tests from `pytest-vcr` interception to YAML cassette replay.
Reviewed changes
Copilot reviewed 12 out of 13 changed files in this pull request and generated 11 comments.
Show a summary per file
| File | Description |
|---|---|
| uv.lock | Locks new Camoufox/Playwright dependency graph and removes oauth/VCR-related packages. |
| pyproject.toml | Swaps runtime deps to camoufox/playwright, updates status classifier, adjusts test deps. |
| src/garth/sso.py | Implements browser-driven login, session cookie persistence, CSRF extraction, and lifecycle helpers. |
| src/garth/http.py | Implements browser-based request pipeline via `page.evaluate(fetch(...))`, plus upload/download helpers and token persistence warnings. |
| src/garth/data/_base.py | Removes ThreadPoolExecutor usage and fetches date ranges sequentially. |
| src/garth/telemetry.py | Removes requests hook integration while keeping sanitization + optional logfire reporting. |
| src/garth/exc.py | Generalizes `GarthHTTPError.error` type away from `requests.HTTPError`. |
| src/garth/__init__.py | Removes deprecation warning and exports `close` for browser cleanup. |
| tests/conftest.py | Replaces VCR interception with YAML cassette replay by monkeypatching `Client.request()` for `@pytest.mark.vcr` tests. |
| tests/test_sso.py | Updates SSO tests for placeholder tokens/sentinels and browser lifecycle helpers. |
| tests/test_http.py | Updates HTTP tests for browser session warnings, unsupported config warnings, upload/download behavior, and refresh persistence. |
| tests/test_telemetry.py | Simplifies telemetry tests to focus on sanitization + basic configuration. |
| tests/test_cli.py | Switches CLI login test to mocking `sso.login()` instead of VCR replay. |
```diff
 def request(
     self,
     method: str,
     subdomain: str,
     path: str,
     /,
     api: bool = False,
     referrer: str | bool = False,
     headers: dict = {},
     **kwargs,
-) -> Response:
-    url = f"https://{subdomain}.{self.domain}"
-    url = urljoin(url, path)
-    if referrer is True and self.last_resp:
-        headers["referer"] = self.last_resp.url
-    if api:
-        assert self.oauth1_token, (
-            "OAuth1 token is required for API requests"
-        )
+) -> _Response:
```
`Client.request()` uses a mutable default argument (`headers: dict = {}`). This can leak headers between calls if the dict is mutated. Use `headers: dict | None = None` and create a new dict inside the method.
```diff
 def request(
     self,
     method: str,
     subdomain: str,
     path: str,
     /,
     api: bool = False,
     referrer: str | bool = False,
     headers: dict = {},
     **kwargs,
-) -> Response:
-    url = f"https://{subdomain}.{self.domain}"
-    url = urljoin(url, path)
-    if referrer is True and self.last_resp:
-        headers["referer"] = self.last_resp.url
-    if api:
-        assert self.oauth1_token, (
-            "OAuth1 token is required for API requests"
-        )
+) -> _Response:
+    """Make an API request through the browser.
+
+    All requests are routed through ``page.evaluate(fetch(...))``
+    in the Camoufox browser, bypassing Cloudflare.
+    """
+    page = sso.get_page()
+    csrf = sso.get_csrf()
+
+    if not page:
+        raise GarthException(
+            msg="Not logged in — call login() first"
+        )
     if (
         not isinstance(self.oauth2_token, OAuth2Token)
         or self.oauth2_token.expired
     ):
         self.refresh_oauth2()
     headers["Authorization"] = str(self.oauth2_token)
-    self.last_resp = self.sess.request(
-        method,
+
+    url = f"/gc-api/{path.lstrip('/')}"
+
+    params = kwargs.get("params")
+    if params:
+        from urllib.parse import urlencode
+        url += "?" + urlencode(params)
```
`Client.request()` currently ignores the `subdomain`, `api`, and `referrer` parameters but still exposes them in the public API. This is a behavioral change (e.g., `client.get("connect", "/")` will now fetch `/gc-api/` and likely 404). Either implement subdomain-specific routing (at least for the previously supported subdomains) or explicitly reject/warn when unsupported values are passed to avoid silent breakage.
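One low-cost mitigation along the lines of this comment is to warn loudly on unsupported values instead of routing everything to `/gc-api/` silently. A sketch — the allow-list contents are a guess, not what garth actually supports:

```python
import warnings

# Hypothetical allow-list; the real set of browser-routable subdomains
# would depend on what the /gc-api/ proxy actually serves.
_BROWSER_ROUTED = {"connectapi"}


def check_subdomain(subdomain: str) -> None:
    """Warn when a subdomain the browser transport ignores is requested."""
    if subdomain not in _BROWSER_ROUTED:
        warnings.warn(
            f"subdomain {subdomain!r} is ignored by the browser transport; "
            "the request will be routed to /gc-api/ and may 404",
            stacklevel=2,
        )
```

Callers relying on the old `requests`-based routing then get an explicit signal instead of a mystery 404.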
```diff
+    json_body = kwargs.get("json")
+    data = kwargs.get("data")
+
+    # Handle file uploads
+    files = kwargs.get("files")
+    if files:
+        if len(files) > 1:
+            raise GarthException(
+                msg="Only single file upload is supported"
+            )
+        return self._upload_via_browser(
+            page, csrf, url, files
+        )
+
+    result = page.evaluate("""
+        async ([url, method, jsonBody, formData, csrf,
+                extraHeaders]) => {
+            const h = {
+                'connect-csrf-token': csrf || '',
+                'Accept': 'application/json'
+            };
+            if (extraHeaders) {
+                Object.assign(h, extraHeaders);
+            }
+            let body = undefined;
+            if (jsonBody) {
+                h['Content-Type'] = 'application/json';
+                body = JSON.stringify(jsonBody);
+            } else if (formData) {
+                h['Content-Type'] =
+                    'application/x-www-form-urlencoded';
+                body = formData;
+            }
+            try {
+                const resp = await fetch(url, {
+                    method: method || 'GET',
+                    credentials: 'include',
+                    headers: h,
+                    body: body
+                });
+                const text = await resp.text();
+                return {
+                    status: resp.status,
+                    text: text,
+                    url: resp.url
+                };
+            } catch(e) {
+                return {status: 0, error: e.message};
+            }
+        }
+    """, [
         url,
-        headers=headers,
-        timeout=self.timeout,
-        **kwargs,
-    )
-    try:
-        self.last_resp.raise_for_status()
-    except HTTPError as e:
+        method.upper(),
+        json_body,
+        data if isinstance(data, str) else None,
+        csrf,
+        headers if headers else None,
+    ])
```
`Client.request()` only forwards `data` when it is a pre-encoded `str` (otherwise it sends `null`). Previously callers could pass a dict/bytes and have it encoded by the HTTP client. To preserve compatibility, consider accepting `dict[str, str]` (URL-encode it) and `bytes` (base64 or ArrayBuffer) in addition to `str`.
```python
def _upload_via_browser(
    self, page, csrf: str, url: str, files: dict
) -> _Response:
    """Upload a single file via browser FormData."""
    field_name, (filename, fp) = next(iter(files.items()))
    file_bytes = fp.read()
    b64_data = base64.b64encode(file_bytes).decode()
```
`_upload_via_browser()` declares `csrf: str`, but callers pass through `sso.get_csrf()` which can be `None` (before `_try_setup()` succeeds). Update the type to `str | None` (or coerce to `""` at the call site) to reflect actual values and avoid type-checker/runtime issues if code starts relying on string methods.
```python
def _upload_via_browser(
    self, page, csrf: str, url: str, files: dict
) -> _Response:
    """Upload a single file via browser FormData."""
    field_name, (filename, fp) = next(iter(files.items()))
    file_bytes = fp.read()
    b64_data = base64.b64encode(file_bytes).decode()

    result = page.evaluate("""
        async ([url, b64Data, fileName, csrf]) => {
            const resp = await fetch(
                'data:application/octet-stream;base64,'
                + b64Data
            );
            const blob = await resp.blob();
            const formData = new FormData();
            formData.append('file', blob, fileName);
            try {
```
In `_upload_via_browser()`, `field_name` is unpacked but never used, which is misleading (it looks like the caller-specified field name is respected, but the JS always appends under `'file'`). Either use `field_name` when building the FormData, or explicitly ignore it (e.g., `_field_name`) and document that the field name is fixed.
src/garth/sso.py (outdated)
```python
# Navigate to Garmin Connect
try:
    _page.goto(
        "https://connect.garmin.com/modern/",
        wait_until="domcontentloaded",
        timeout=30000,
    )
except Exception as e:
    log.debug("Initial navigation error (may be expected): %s", e)
time.sleep(3)

# Check if session is valid
url = _page.url
needs_login = "sso.garmin.com" in url or not _try_setup()
```
Browser navigation is hardcoded to `https://connect.garmin.com/modern/`. This will break `client.domain == "garmin.cn"` (and any future domains), even though `_do_login()` attempts to support garmin.cn. Use `client.domain` to derive the correct Connect host for the initial navigation and session validation (and keep it consistent with `_do_login()`).
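Deriving the Connect host from the configured domain is a one-liner; a sketch of what the comment asks for (the helper name is illustrative):

```python
def connect_base_url(domain: str) -> str:
    """Derive the Connect host from the client's configured domain.

    Works for garmin.com, garmin.cn, and any future regional domain,
    instead of hardcoding the .com host.
    """
    return f"https://connect.{domain}/modern/"
```

Storing this once at browser launch and reusing it in both `login()` and `_try_setup()` also keeps the two navigation paths consistent.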
src/garth/sso.py (outdated)
```python
def _try_setup() -> bool:
    """Extract CSRF token and verify session is valid."""
    global _csrf
    current = _page.url
    if "/modern/" not in current:
        try:
            _page.goto(
                "https://connect.garmin.com/modern/",
                wait_until="domcontentloaded",
            )
        except Exception as e:
```
`_try_setup()` navigates to `https://connect.garmin.com/modern/` when the current URL isn't under `/modern/`. Like the initial navigation in `login()`, this hardcodes the Connect host and will break non-.com domains (e.g., garmin.cn). Consider storing the resolved Connect base URL when launching the browser and reusing it here.
src/garth/sso.py (outdated)
```python
_browser = _cm.__enter__()
_page = _browser.new_page()
context = _page.context

# Load saved session cookies
if session_file.exists():
    try:
        session = json.loads(session_file.read_text())
        age_days = (
            time.time() - session.get("saved_at", 0)
        ) / 86400
        if age_days < 364 and session.get("cookies"):
            context.add_cookies(session["cookies"])
            log.info(
                "Loaded session cookies (%.0f days old)", age_days
            )
    except Exception as e:
        log.debug("Could not load session cookies: %s", e)
```
`login()` manually calls `_cm.__enter__()` but doesn't guarantee cleanup if an exception is raised mid-login (e.g., navigation timeout, selector change). Wrapping the login flow in try/except with `close()` in a `finally` (or using `with Camoufox(...) as browser:`) would prevent leaking a running browser process on failures.
Suggested change:

```diff
-_browser = _cm.__enter__()
-_page = _browser.new_page()
-context = _page.context
-
-# Load saved session cookies
-if session_file.exists():
-    try:
-        session = json.loads(session_file.read_text())
-        age_days = (
-            time.time() - session.get("saved_at", 0)
-        ) / 86400
-        if age_days < 364 and session.get("cookies"):
-            context.add_cookies(session["cookies"])
-            log.info(
-                "Loaded session cookies (%.0f days old)", age_days
-            )
-    except Exception as e:
-        log.debug("Could not load session cookies: %s", e)
+try:
+    _browser = _cm.__enter__()
+    _page = _browser.new_page()
+    context = _page.context
+    # Load saved session cookies
+    if session_file.exists():
+        try:
+            session = json.loads(session_file.read_text())
+            age_days = (
+                time.time() - session.get("saved_at", 0)
+            ) / 86400
+            if age_days < 364 and session.get("cookies"):
+                context.add_cookies(session["cookies"])
+                log.info(
+                    "Loaded session cookies (%.0f days old)", age_days
+                )
+        except Exception as e:
+            log.debug("Could not load session cookies: %s", e)
+except Exception:
+    # Ensure that any partially-initialized browser is cleaned up
+    close()
+    raise
```
tests/conftest.py (outdated)
```python
    headers={},
    **kwargs,
):
```
`mock_request()` uses mutable default arguments (`headers={}`), which can leak state between calls if mutated. Use `headers=None` and normalize inside the function.
Suggested change:

```diff
-    headers={},
-    **kwargs,
-):
+    headers=None,
+    **kwargs,
+):
+    headers = {} if headers is None else headers
```
tests/conftest.py (outdated)
```python
if best_idx is None and remaining:
    best_idx = 0

if best_idx is None:
    raise RuntimeError(
        "No cassette interactions remaining"
    )
```
The cassette replay fallback `if best_idx is None and remaining: best_idx = 0` will silently return an unrelated interaction when no match is found, which can hide real regressions and make tests non-deterministic. It would be safer to raise with a helpful error message (e.g., the expected method/path and a list of remaining cassette URIs) when no match is found.
Suggested change:

```diff
-if best_idx is None and remaining:
-    best_idx = 0
-
-if best_idx is None:
-    raise RuntimeError(
-        "No cassette interactions remaining"
-    )
+if best_idx is None:
+    expected = f"{method.upper()} {match_path}"
+    if remaining:
+        remaining_desc = ", ".join(
+            f"{i}: {r['request'].get('method', 'GET')} {r['request'].get('uri', '')}"
+            for i, r in enumerate(remaining)
+        )
+        msg = (
+            "No matching cassette interaction found for "
+            f"{expected}. Remaining interactions: {remaining_desc}"
+        )
+    else:
+        msg = (
+            "No cassette interactions remaining; expected "
+            f"{expected}"
+        )
+    raise RuntimeError(msg)
```
Actionable comments posted: 5
🧹 Nitpick comments (7)
tests/test_sso.py (1)

81-90: Consider test isolation for global state tests.

`test_close_is_safe_before_login` and `test_close_clears_all_state` both call `sso.close()`, which mutates module-level globals (`_browser`, `_cm`, `_page`, `_csrf`). If tests run in parallel or in different orders, this could cause flakiness. Consider adding a fixture that ensures clean state before/after these tests.

♻️ Add a fixture to ensure clean state

```python
@pytest.fixture(autouse=True)
def _clean_sso_state():
    """Ensure SSO module state is clean before and after each test."""
    sso.close()
    yield
    sso.close()
```

src/garth/telemetry.py (1)
124-125: Consider removing the empty `model_post_init` or adding a comment.

With `_attached_sessions` removed, `model_post_init` now does nothing. Either remove it entirely or add a brief comment explaining why it's kept (e.g., for future extension).

♻️ Option 1: Remove empty method

```diff
-    def model_post_init(self, __context):
-        pass
```

tests/test_http.py (2)
80-106: Import statements inside test functions.

The `logging` import at lines 99 and 124 is placed inside the test functions rather than at the module top. While functional, this is unconventional. Consider moving it to the top-level imports for consistency.

Suggested fix

Add to top of file:

```python
import logging
```

Then remove the inline imports at lines 99 and 124.
275-303: Consider using a NamedTuple for fixture return value.

The fixture yields a 5-element tuple that is unpacked in tests. Using a `NamedTuple` or dataclass would improve readability and reduce the risk of unpacking errors.

Example improvement

```python
from typing import NamedTuple

class GarthHomeFixture(NamedTuple):
    client: Client
    tempdir: str
    mock_oauth1: OAuth1Token
    mock_oauth2: OAuth2Token
    sso_mod: Any

@pytest.fixture
def garth_home_client(monkeypatch: pytest.MonkeyPatch):
    # ... setup code ...
    yield GarthHomeFixture(client, tempdir, mock_oauth1, mock_oauth2, garth.sso)
```

src/garth/sso.py (1)
19-26: Module-level singleton state is documented but risky.

The module-level globals (`_cm`, `_browser`, `_page`, `_csrf`) create a singleton pattern that the docstrings correctly warn is not thread-safe. However, there's no locking mechanism to prevent race conditions if multiple threads accidentally call `login()` or `close()` simultaneously.

Consider adding a warning log if a concurrent call is detected, or document this limitation more prominently.

src/garth/http.py (2)
184-185: Mutable default argument `headers={}`.

Using a mutable default argument is a well-known Python antipattern. While the function doesn't modify `headers` currently, this could cause subtle bugs if the function is later modified to append headers.

Suggested fix

```diff
 def request(
     self,
     method: str,
     subdomain: str,
     path: str,
     /,
     api: bool = False,
     referrer: str | bool = False,
-    headers: dict = {},
+    headers: dict | None = None,
     **kwargs,
 ) -> _Response:
     ...
+    headers = headers or {}
```
288-288: Unused variable `field_name`.

`field_name` is unpacked but not used. The JavaScript always uses `'file'` as the FormData field name. If this is intentional (Garmin API requirement), prefix with underscore to suppress the warning. If the field name should be respected, pass it to the JavaScript.

Suggested fix (if intentional)

```diff
-    field_name, (filename, fp) = next(iter(files.items()))
+    _field_name, (filename, fp) = next(iter(files.items()))
```
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/garth/sso.py`:
- Around line 382-386: The current fragile UI detection uses _page.evaluate
checking document.body?.innerText?.includes("Activities") via the has_app
variable; replace this text-based check with a robust DOM or URL-based
detection: in the code where has_app is assigned (the _page.evaluate call in
src/garth/sso.py), query for a stable element selector (e.g.,
document.querySelector for a known id/class/aria attribute used by the
Activities app) or check the page URL/pattern via _page.url or a navigation
response instead of matching visible text, and update the conditional that uses
has_app to use the new existence/URL check.
- Around line 101-107: The code manually calls _cm.__enter__() on the Camoufox
context manager (symbols: Camoufox, _cm, _browser, _page) without guaranteeing
_cm.__exit__() runs if an exception (e.g., in login()) occurs, which can leak
the browser process; fix by converting this to a true context manager usage
(with Camoufox(headless=headless) as _browser: ...) or wrap the enter/usage in
try/finally where finally calls _cm.__exit__() or _cm.close()/cleanup, ensuring
any exception during login() triggers the cleanup path and that _browser/_page
are closed reliably.
In `@tests/conftest.py`:
- Around line 105-115: The mock_request function uses a mutable default argument
headers={}, which can lead to shared-state bugs; change the signature of
mock_request to use headers=None and inside the function initialize it (e.g., if
headers is None: headers = {}) before use, ensuring any modifications are local
to the call and not shared across invocations; update any callers or tests if
they relied on the previous default behavior.
- Around line 135-141: The current fallback that sets best_idx = 0 when best_idx
is None and remaining is truthy should be removed because it can return
unrelated cassette data; in the block handling cassette matching (referencing
best_idx and remaining) replace that fallback with a RuntimeError that includes
the requested method/path (or request object) and a summary of remaining
interactions so tests fail loudly and show what was requested vs. what cassettes
are available; alternatively, if a permissive behavior is desired, gate the
fallback behind an explicit option (e.g., allow_unmatched=True) and document it,
but do not silently default to the first remaining interaction.
In `@tests/test_http.py`:
- Around line 357-365: The test compares oauth1_mtime_before and
oauth1_mtime_after after calling client.refresh_oauth2() which is flaky because
filesystem mtime resolution may be coarse; instead read and compare the file
contents (or compute a checksum) at oauth1_path before and after calling
client.refresh_oauth2() to assert no change, or relax the assertion to allow
equal OR unchanged content — locate the variables oauth1_path and
oauth1_mtime_before and the call client.refresh_oauth2() in the test and replace
the mtime-based assertion with a content-based comparison (e.g., read file into
a bytes/string or compute hash) to ensure the test is robust across filesystems.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a5ee5d9d-f97f-44eb-8f95-c04c62c74179
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (12)
- pyproject.toml
- src/garth/__init__.py
- src/garth/data/_base.py
- src/garth/exc.py
- src/garth/http.py
- src/garth/sso.py
- src/garth/telemetry.py
- tests/conftest.py
- tests/test_cli.py
- tests/test_http.py
- tests/test_sso.py
- tests/test_telemetry.py
…p, domain support, stricter cassette matching
Addressed all review feedback in 4c26f62:
🧹 Nitpick comments (2)
src/garth/sso.py (1)

348-419: Consider adding a timeout parameter for the login polling loop.

The 120-iteration loop with 1-second sleeps (a 2-minute timeout) is hardcoded. For users with slow connections or complex MFA flows, this might be insufficient.

♻️ Optional: Make timeout configurable

```diff
 def _do_login(
     email: str,
     password: str,
     prompt_mfa: Callable | None,
     domain: str = "garmin.com",
+    login_timeout: int = 120,
 ) -> None:
     ...
-    for _ in range(120):
+    for _ in range(login_timeout):
```

src/garth/http.py (1)
70-81: Note: Telemetry HTTP hooks are now inactive. With the switch to browser-based transport, the `Telemetry` class in `telemetry.py` can no longer hook into HTTP requests/responses. The callback mechanism and session tracking are effectively dead code. If telemetry for API calls is desired, it would need to be reimplemented within the `_Response` handling or the JavaScript fetch wrapper.
@nrvim thanks, I got python-garminconnect working again using your code. I decided to integrate it without depending on garth, because it's unclear whether matin will pick it up again... I almost dropped it too... I'll do some more testing and release a new version.
nice, but please take a look at #217; there are new findings there that might be easier than my PR. I think my PR is the last mile for when everything else fails.
@cyberjunky I think @matin is busy with his own life, as he decided, and he did so much for this project! Huge respect to him! I am so excited to hear that it's working for you again; just waiting for better news!
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #225      +/-   ##
==========================================
- Coverage   99.88%   94.07%   -5.81%
==========================================
  Files          68       68
  Lines        3571     3361     -210
==========================================
- Hits         3567     3162     -405
- Misses          4      199     +195
```

Flags with carried forward coverage won't be shown.
Summary
This PR revives garth by replacing the broken HTTP transport with Camoufox browser automation. Garmin made multiple changes that broke all Python access: the SSO login endpoint (`/mobile/api/login`) was disabled and returns 400, the OAuth token endpoint (`/oauth-service/oauth/preauthorized`) was disabled and returns 401, and aggressive Cloudflare bot detection was deployed that blocks every Python HTTP client, including `requests`, `httpx`, and `curl_cffi`, even with Chrome TLS impersonation. Extracting cookies from a browser and replaying them in Python also fails. There is no HTTP-level fix.

The only working method is `page.evaluate(fetch(...))` inside a real browser. This PR implements that.

Why
garth was deprecated on March 28, 2026 because Garmin's changes made it unfixable at the HTTP level. But garth is not just a library. It's the foundation of an ecosystem.
python-garminconnect with 1.7K stars depends on garth for auth. The Home Assistant Garmin integration with 452 stars depends on python-garminconnect. GarminDB with 3K stars, garminexport with 564 stars, garmin-connect-export with 486 stars are all broken. Dozens of MCP servers, fitness tools, and personal health projects are all dead.
The maintainers of these projects didn't choose to break. Garmin forced it. This PR gives them a path back.
What changed
Source files (7 files)

- `sso.py`: browser-based login; session persisted under `~/.garth/` with roughly a 1-year lifetime.
- `http.py`: `Client.request()` routes through `page.evaluate(fetch(...))`. Binary download via FileReader/blob. Upload via FormData.
- `data/_base.py`: `Data.list()` runs sequentially instead of using `ThreadPoolExecutor`, because Playwright cannot be called from multiple threads.
- `exc.py`: `GarthHTTPError.error` changed from `requests.HTTPError` to base `Exception`.
- `telemetry.py`: drops the `requests.Session`/`Response` dependency. Sanitization utilities preserved.
- `__init__.py`: adds a `close` export for browser cleanup.
- `pyproject.toml`: dependencies changed from `requests` and `requests-oauthlib` to `camoufox` and `playwright`. Status changed from Inactive to Beta.

Test files (5 files)
- `conftest.py`
- `test_sso.py`
- `test_http.py`
- `test_telemetry.py`
- `test_cli.py`: exercises `sso.login()` instead of VCR replay.

Not changed
All data models in the `data/` directory are untouched. All stats classes in the `stats/` directory are untouched. `auth_tokens.py`, `utils.py`, `cli.py`, and the `users/` module are all untouched. All cassette YAML files are preserved as test data.

How it works
First, `garth.login(email, password)` launches a headless Camoufox browser, navigates to Garmin SSO, fills the login form, and handles MFA if needed. It then saves session cookies.

After that, `Client.request()` routes all API calls through `page.evaluate(fetch(...))`, which is JavaScript `fetch()` running inside the browser context.

Session cookies persist to `~/.garth/browser_session.json` for roughly 1 year. Subsequent runs restore the cookies, so no login or MFA is needed.

What I tested before submitting
I tested every approach before landing on browser automation:
- `requests` with cookies
- `curl_cffi` with Chrome TLS impersonation
- `httpx` with extracted browser cookies
- the SSO login endpoint (`/mobile/api/login`) directly
- `page.evaluate(fetch(...))` in Camoufox

Test results
137 unit tests passing, down from 163 because tests for dead HTTP code were removed while all data model tests were kept.
72 out of 72 live integration tests passing against the real Garmin API, covering 24 connectapi endpoints, 14 data model `.get()` calls, 13 data model `.list()` calls, 12 stats class `.list()` calls, binary FIT file download, and token persistence through dump, load, dumps, and loads.

Known limitations
- `page.evaluate()` cannot be called from multiple threads. `Data.list()` was changed to run sequentially instead of using `ThreadPoolExecutor`.
- One browser session per `garth.client`. Multi-account use requires separate processes.
- `login()` returns placeholder OAuth tokens with a sentinel value of `"browser_session"`. The browser session handles real auth via cookies. Code that inspects token contents will see fake values. `load()` and `loads()` detect these and warn users to call `login()`.
- `return_on_mfa` not supported; raises `NotImplementedError`. MFA is handled interactively in the browser via the `prompt_mfa` callback.
- `proxies` and `ssl_verify` not supported.

Intent
This is not a small patch. It's a fundamental transport change forced by Garmin's decisions to disable auth endpoints and deploy aggressive bot detection. The goal is to:
`garth.login()`, `garth.connectapi()`, `garth.download()`, data models, and stats classes all work exactly as before.

I'm not asking garth to endorse browser automation as an ideal architecture. I'm saying the community needs garth to work, and this is the only way to make it work today, unless someone in the community finds a different approach.
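For reference, the browser-routed transport described under "How it works" can be sketched roughly as follows. This is a hypothetical sketch, not garth's actual implementation: the names `browser_request`, `save_cookies`, and `restore_cookies` are illustrative, and the only assumed external APIs are Playwright's `page.evaluate()`, `BrowserContext.cookies()`, and `BrowserContext.add_cookies()`.

```python
import base64
import json
from pathlib import Path

# JavaScript evaluated inside the browser context. Binary bodies are
# base64-encoded so they survive the JS -> Python boundary of page.evaluate().
_FETCH_JS = """
async ({ url, init }) => {
    const resp = await fetch(url, init);
    const bytes = new Uint8Array(await resp.arrayBuffer());
    let binary = "";
    for (const b of bytes) binary += String.fromCharCode(b);
    return { status: resp.status, body: btoa(binary) };
}
"""


def browser_request(page, url, method="GET", headers=None):
    """Route one API call through the page's fetch(); returns (status, bytes)."""
    result = page.evaluate(
        _FETCH_JS,
        {"url": url, "init": {"method": method, "headers": headers or {}}},
    )
    return result["status"], base64.b64decode(result["body"])


def save_cookies(context, path: Path) -> None:
    # Playwright's BrowserContext.cookies() returns JSON-serializable dicts.
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(context.cookies()))


def restore_cookies(context, path: Path) -> bool:
    """Reload a saved session so later runs skip login and MFA."""
    if not path.exists():
        return False
    context.add_cookies(json.loads(path.read_text()))
    return True
```

Because the request runs as `fetch()` inside a real browser, the Cloudflare checks see an ordinary page-originated request, which is what makes this approach work where plain HTTP clients fail.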