
harness: stabilize embed OOM tuning and defaults #1823

Open

jioffe502 wants to merge 2 commits into NVIDIA:main from jioffe502:feature/embed-oom-stability-graph-port

Conversation

Collaborator

@jioffe502 jioffe502 commented Apr 8, 2026

TLDR

This PR hardens embedding stability by separating two different control knobs:

  • embed_batch_size for Ray Data transport/scheduling, and
  • embed_inference_batch_size for local model forward-pass VRAM pressure.

It keeps the change focused on core runtime behavior: decoupled tuning controls plus adaptive OOM retry.

Problem

During bo767 validation, we observed long-sequence tail batches driving CUDA OOM-like failures and throughput volatility. The key finding is that a single batch knob was doing two jobs:

  • scheduler/transport sizing (embed_batch_size), and
  • model microbatch sizing (inference_batch_size).

That coupling makes stability tuning hard, especially when sequence lengths are heterogeneous.

What Changed

1) Decoupled controls in harness/CLI

  • Added embed_inference_batch_size to harness config/defaults/validation.
  • Added CLI wiring so harness passes --embed-inference-batch-size through to runtime.
  • Bound embed params so that embed_inference_batch_size takes precedence over embed_batch_size for model forward-pass batching (wiring sketched below).
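
A minimal sketch of the wiring described above; the flag names follow this PR, but the EmbedParams plumbing is simplified and the fallback behavior shown here is an assumption, not the exact graph_pipeline.py code:

```python
# Sketch only: decoupled batch knobs on the CLI, feeding two different consumers.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--embed-batch-size", type=int, default=256,
                    help="Ray Data transport/scheduling batch size")
parser.add_argument("--embed-inference-batch-size", type=int, default=32,
                    help="model forward-pass microbatch (VRAM pressure)")
args = parser.parse_args()

# The transport size stays with Ray Data scheduling; only the inference knob
# (falling back to the transport size when unset) reaches the model forward pass.
ray_data_batch_size = args.embed_batch_size
embed_params_kwargs = {
    "inference_batch_size": args.embed_inference_batch_size or args.embed_batch_size or None,
}
```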

2) Adaptive OOM stabilization in local embed runtime

  • On CUDA OOM-like errors, automatically reduce the local embed microbatch and retry.
  • After a streak of successful batches, grow the batch size gradually back toward the target (retry/recovery loop sketched below).
  • Emit warnings with char/token diagnostics for failing chunks.
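
A minimal sketch of the retry/recovery loop described above, assuming a PyTorch forward pass exposed as encode(texts); the doubling-based recovery and streak length are illustrative, and the real implementation in llama_nemotron_embed_1b_v2_embedder.py differs in structure and diagnostics:

```python
import torch

def embed_with_adaptive_retry(encode, texts, batch_size, recovery_streak=3):
    """Halve the microbatch on CUDA OOM, then grow back after a streak of clean batches."""
    target_bs = max(1, int(batch_size))
    current_bs = target_bs
    successes = 0
    outs, i = [], 0
    while i < len(texts):
        chunk = texts[i : i + current_bs]
        try:
            outs.append(encode(chunk))
            i += len(chunk)
            successes += 1
            # After a streak of clean batches, step back toward the requested size.
            if successes >= recovery_streak and current_bs < target_bs:
                current_bs = min(target_bs, current_bs * 2)
                successes = 0
        except torch.cuda.OutOfMemoryError:
            if current_bs <= 1:
                raise  # cannot shrink further; surface the OOM
            torch.cuda.empty_cache()
            current_bs = max(1, current_bs // 2)  # halve and retry the same chunk
            successes = 0
    return torch.cat(outs, dim=0)
```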

3) Runtime parity + operational controls

  • Added Ray object-store sizing support via RAY_DEFAULT_OBJECT_STORE_MEMORY_PROPORTION / RAY_OBJECT_STORE_MEMORY_BYTES (resolution sketched below).
  • Forwarded runtime env/logging controls into Ray init path.
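
A minimal sketch of the object-store sizing described above; the environment variable names match the PR, while the precedence order and the /dev/shm cap shown here are assumptions about the intent of _resolve_object_store_memory_bytes():

```python
import os
import shutil

def resolve_object_store_memory_bytes(total_memory_bytes: int) -> int | None:
    """Resolve an object-store size from env vars, capped at available /dev/shm."""
    explicit = os.environ.get("RAY_OBJECT_STORE_MEMORY_BYTES")
    if explicit:
        requested = int(explicit)
    else:
        proportion = os.environ.get("RAY_DEFAULT_OBJECT_STORE_MEMORY_PROPORTION")
        if not proportion:
            return None  # let Ray choose its own default
        requested = int(total_memory_bytes * float(proportion))
    shm_free = shutil.disk_usage("/dev/shm").free
    return min(requested, shm_free)

# Typical use (sketch): ray.init(object_store_memory=resolve_object_store_memory_bytes(total_ram))
```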

Quantitative Evidence (bo767)

All runs below use embed_inference_batch_size=32.

| Run | gpu_embed | ingest PPS | recall@5 | Artifact |
| --- | --- | --- | --- | --- |
| bo767_20260406_193158_UTC | 1.0 | 152.99 | 0.8345 | nemo_retriever/artifacts/bo767_20260406_193158_UTC/results.json |
| bo767_20260406_202408_UTC | 1.0 | 153.56 | 0.8365 | nemo_retriever/artifacts/bo767_20260406_202408_UTC/results.json |
| bo767_20260406_204929_UTC | 1.0 | 151.22 | 0.8325 | nemo_retriever/artifacts/bo767_20260406_204929_UTC/results.json |
| bo767_20260408_154723_UTC | 1.0 | 146.45 | 0.8315 | nemo_retriever/artifacts/bo767_20260408_154723_UTC/results.json |
| bo767_20260408_160637_UTC | 1.0 | 146.05 | 0.8365 | nemo_retriever/artifacts/bo767_20260408_160637_UTC/results.json |
| bo767_20260408_172412_UTC | 1.0 | 146.53 | 0.8335 | nemo_retriever/artifacts/bo767_20260408_172412_UTC/results.json |
| bo767_20260408_161941_UTC | 0.25 | 160.26 | 0.8365 | nemo_retriever/artifacts/bo767_20260408_161941_UTC/results.json |
| bo767_20260408_173706_UTC | 0.25 | 155.54 | 0.8355 | nemo_retriever/artifacts/bo767_20260408_173706_UTC/results.json |

Key Takeaways

  • Decoupled microbatch control (embed_inference_batch_size=32) is stable and repeatable across runs.
  • Recall stays in a tight band (~0.833–0.836 recall@5).
  • Throughput is strongly sensitive to gpu_embed policy in these validations:
    • gpu_embed=1.0 cohort: ~146 PPS
    • gpu_embed=0.25 cohort: >150 PPS
  • Adaptive OOM handling keeps runs progressing through long-tail batches instead of hard failing.

Recommendation

  • Merge this as the system-level OOM stabilization/control layer.
  • Keep default embed_inference_batch_size=32 in harness presets.
  • Treat gpu_embed as an explicit policy knob for a follow-up performance decision (isolation vs. throughput).

Why This Is Safe

  • Additive changes around tuning and runtime behavior; no API removals.
  • Retry logic activates only on OOM-like failures.
  • Conservative default microbatch (32) aligns with observed stability.
  • Harness/graph/resource tests pass.

When To Override

  • Increase embed_inference_batch_size only with sustained VRAM headroom and zero OOM retry pressure.
  • Decrease it for long-sequence-heavy corpora or when OOM retry frequency increases.
  • Tune gpu_embed according to the target objective (illustrative profiles sketched after this list):
    • stability isolation: closer to 1.0
    • throughput: evaluate lower fractions (e.g., 0.25) with recall guardrails.
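
As referenced above, two illustrative override profiles; the field names follow this PR's HarnessConfig additions, but these dicts are examples rather than shipped presets:

```python
# Hypothetical tuning profiles; values reflect the bo767 evidence in this PR.
STABILITY_OVERRIDES = {"embed_inference_batch_size": 32, "gpu_embed": 1.0}    # isolation-first
THROUGHPUT_OVERRIDES = {"embed_inference_batch_size": 32, "gpu_embed": 0.25}  # check recall@5 guardrails
```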

Rollout

  1. Merge with embed_inference_batch_size=32 defaults.
  2. Validate bo767/jp20 in nightly.
  3. Track OOM retry frequency and recall@5 drift.
  4. Finalize gpu_embed performance policy in a dedicated follow-up decision.

Rollback

  • Revert this commit to restore previous runtime behavior.
  • Immediate mitigation without code rollback: override embed_inference_batch_size and/or gpu_embed via harness CLI.

Open Questions For Lead Decision

  • Which gpu_embed policy should define productized performance baselines?
  • Should OOM retry events be informational telemetry only, or a gating signal for stability-required runs?

Test Plan

  • uv run pytest tests/test_harness_config.py tests/test_harness_run.py tests/test_resource_heuristics.py
  • uv run pytest tests/test_ingest_interface.py tests/test_graph_pipeline_registry.py
  • uv run pytest tests/test_create_local_embedder.py tests/test_multimodal_embed.py tests/test_operator_flags_and_cpu_actors.py
  • bo767 repeated harness validations with embed_inference_batch_size=32 (artifacts listed above)

Port embed OOM hardening and decoupled inference microbatch controls into the graph-era harness path so bo767 runs can be tuned for stability without conflating the inference microbatch with the Ray scheduling batch size. Capture OOM outlier metadata for attribution, restore graph Ray init parity knobs, and document the refactor/validation context for lead review.

Signed-off-by: jioffe502 <jioffe@nvidia.com>
@jioffe502 jioffe502 requested review from a team as code owners April 8, 2026 18:41
@jioffe502 jioffe502 requested a review from jperez999 April 8, 2026 18:41

copy-pr-bot bot commented Apr 8, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@jioffe502 jioffe502 changed the title from "harness: Stabilize graph embed OOM tuning and defaults" to "harness: stabilize embed OOM tuning and defaults" on Apr 8, 2026
Contributor

greptile-apps bot commented Apr 8, 2026

Greptile Summary

This PR decouples two embedding control knobs — embed_batch_size (Ray Data transport scheduling) and embed_inference_batch_size (GPU forward-pass microbatching) — and adds adaptive OOM retry with streak-based batch-size recovery. The harness, CLI, and EmbedParams are updated accordingly, and GraphIngestor gains Ray object-store sizing support.

The harness path is functionally correct: --embed-inference-batch-size is mapped to EmbedParams.inference_batch_size (line 446 of graph_pipeline.py), which is the parameter actually consumed by embed_text_main_text_embed(). However, three P2 items are worth tidying up before the next iteration.

Confidence Score: 5/5

Safe to merge; the harness path is functionally correct and all remaining findings are P2 cleanup items.

The primary user path (harness → graph_pipeline.py) correctly maps embed_inference_batch_size to inference_batch_size (line 446) which is the operative control. The adaptive OOM retry logic is sound. All three flagged issues are P2: one dead code line, one stale comment, and one logging style concern. None block correct operation.

Files flagged for follow-up: graph_pipeline.py (dead embed_inference_batch_size assignment at line 447) and llama_nemotron_embed_1b_v2_embedder.py (warnings.warn deduplication).

Important Files Changed

| Filename | Overview |
| --- | --- |
| nemo_retriever/src/nemo_retriever/model/local/llama_nemotron_embed_1b_v2_embedder.py | Adds adaptive OOM retry with halving/streak recovery; logic is sound. warnings.warn deduplication may hide repeated OOM events from operators. Previously-flagged bare-except and stale-state issues remain open. |
| nemo_retriever/src/nemo_retriever/examples/graph_pipeline.py | New --embed-inference-batch-size CLI arg correctly feeds into EmbedParams.inference_batch_size (line 446). Line 447 also sets embed_inference_batch_size in the dict, but this field is silently discarded by embed_text_main_text_embed() via **_: Any. |
| nemo_retriever/src/nemo_retriever/harness/config.py | embed_inference_batch_size cleanly added to HarnessConfig, TUNING_FIELDS, validation loop, and env-override map. No issues. |
| nemo_retriever/src/nemo_retriever/harness/run.py | Correctly forwards --embed-inference-batch-size only in the non-heuristics path (consistent with how all other explicit tuning params are handled). |
| nemo_retriever/src/nemo_retriever/utils/ray_resource_hueristics.py | EMBED_BATCH_SIZE comment on line 25 says 'Ray batch size AND EMBEDDING inference batch size' — now stale after this PR decouples the two knobs. |
| nemo_retriever/src/nemo_retriever/graph_ingestor.py | Adds _resolve_object_store_memory_bytes() with proper proportion/bytes env-var handling and /dev/shm capping. No issues. |

Sequence Diagram

sequenceDiagram
    participant H as Harness run.py
    participant GP as graph_pipeline.py
    participant EP as EmbedParams
    participant BA as BatchEmbedGPUActor
    participant RT as embed_text_main_text_embed
    participant EM as LlamaNemotronEmbedder

    H->>GP: --embed-batch-size 256 --embed-inference-batch-size 32
    GP->>EP: inference_batch_size=32 via line 446
    GP->>EP: embed_inference_batch_size=32 via line 447 (dead field)
    EP->>BA: model_dump sends both fields
    BA->>RT: inference_batch_size=32 matched, embed_inference_batch_size dropped by kwargs
    RT->>EM: model.embed(batch, batch_size=32)
    Note over EM: OOM triggers halve and retry
    Note over EM: 3 successes triggers batch size growth
    EM-->>RT: CPU tensor shape N x D
    RT-->>BA: DataFrame with embeddings
    BA-->>GP: result batch
Prompt To Fix All With AI
Path: nemo_retriever/src/nemo_retriever/examples/graph_pipeline.py
Line: 446-447

Comment:
**`embed_inference_batch_size` in `EmbedParams` is silently discarded**

`embed_text_main_text_embed()` absorbs unknown keyword arguments via `**_: Any`, so line 447's `"embed_inference_batch_size": embed_inference_batch_size or None` is set on `EmbedParams` but never read by the runtime. The operative control knob is `inference_batch_size` on line 446 — that is what flows through to `model.embed(batch, batch_size=...)`. Line 447 is dead code and creates a misleading API: a direct Python API caller who does `EmbedParams(embed_inference_batch_size=64)` will silently see no change in forward-pass batching.

Consider either removing line 447 (the value is already fully captured by line 446), or wiring `embed_inference_batch_size` as a named parameter in `embed_text_main_text_embed()` if it is meant to carry independent semantics in the future.

```suggestion
                    "inference_batch_size": embed_inference_batch_size or embed_batch_size or None,
```

---

Path: nemo_retriever/src/nemo_retriever/utils/ray_resource_hueristics.py
Line: 25

Comment:
**Stale comment after decoupling**

The comment `# Ray batch size AND EMBEDDING inference batch size` now contradicts the core intent of this PR. After decoupling, `EMBED_BATCH_SIZE` is the Ray Data transport/scheduling batch size only; the inference microbatch is controlled by `embed_inference_batch_size` (default 32 in `HarnessConfig`).

```suggestion
EMBED_BATCH_SIZE = 256  # Ray Data transport/scheduling batch size (not the model forward-pass microbatch)
```

---

Path: nemo_retriever/src/nemo_retriever/model/local/llama_nemotron_embed_1b_v2_embedder.py
Line: 258-263

Comment:
**`warnings.warn` deduplicates — repeated OOM events go silent**

Python's warnings machinery shows the same `(message, category, module, lineno)` tuple only once per interpreter session by default. In a long-running embedding job that hits OOM many times across different batches, only the first OOM retry warning will surface; subsequent ones are silently swallowed. This makes it impossible for operators to track OOM frequency from logs.

Using `logger.warning(...)` instead ensures every event is emitted and respects the application's log-level configuration. The same applies to the identical `warnings.warn` call in the `RuntimeError` handler below (line ~279).

```suggestion
                        logger.warning(
                            "CUDA OOM during embedding; retrying with batch_size=%d "
                            "(requested=%d, %s)",
                            current_bs,
                            target_bs,
                            diag,
                        )
```


Comment on lines +271 to +281
capture_path = Path(capture_path_raw).expanduser()
capture_is_dir = capture_path_raw.endswith("/") or (capture_path.exists() and capture_path.is_dir())
if capture_is_dir:
    capture_path.mkdir(parents=True, exist_ok=True)
    capture_file = capture_path / f"embed_oom_outliers_pid{os.getpid()}.jsonl"
else:
    capture_path.parent.mkdir(parents=True, exist_ok=True)
    capture_file = capture_path

line = json.dumps(event, ensure_ascii=True) + "\n"
with capture_file.open("a", encoding="utf-8") as f:

P1 Unguarded I/O in _capture_oom_outlier_event can abort OOM retry

_capture_oom_outlier_event is called from inside except torch.cuda.OutOfMemoryError: and except RuntimeError: blocks where the intent is to halve the batch size and retry. If capture_path.mkdir() or capture_file.open(...) raises (disk full, permission denied, race on directory creation) the new I/O exception propagates out of the except-handler, replacing the OOM and killing the embedding job instead of retrying. The whole function must be wrapped in a top-level try/except so that any failure is silently dropped rather than breaking recovery.

try:
    capture_path = Path(capture_path_raw).expanduser()
    capture_is_dir = capture_path_raw.endswith("/") or (capture_path.exists() and capture_path.is_dir())
    if capture_is_dir:
        capture_path.mkdir(parents=True, exist_ok=True)
        capture_file = capture_path / f"embed_oom_outliers_pid{os.getpid()}.jsonl"
    else:
        capture_path.parent.mkdir(parents=True, exist_ok=True)
        capture_file = capture_path

    line = json.dumps(event, ensure_ascii=True) + "\n"
    with capture_file.open("a", encoding="utf-8") as f:
        try:
            import fcntl
            fcntl.flock(f.fileno(), fcntl.LOCK_EX)
            f.write(line)
            f.flush()
            fcntl.flock(f.fileno(), fcntl.LOCK_UN)
        except Exception:
            f.write(line)
            f.flush()
    self._oom_capture_event_count += 1
except Exception as capture_exc:
    logger.debug("OOM capture write failed (non-fatal): %s", capture_exc)

Comment on lines +83 to +93
try:
    tokenized = tokenizer(
        list(chunk),
        padding=False,
        truncation=True,
        max_length=max(1, int(max_length)),
        return_length=True,
    )
    token_lengths = tokenized.get("length")
except Exception:
    token_lengths = None

P2 Silent exception swallow in tokenization path

The bare except Exception: on the tokenizer call in _batch_length_summary swallows tokenization errors without logging. Since this helper is called during OOM recovery to build diagnostic information, a silent failure means the warning message emitted to the user will have tok_max=None, tok_p95=None with no indication that tokenization itself failed. Consider adding a logger.debug(...) at minimum so OOM diagnostic information gaps are traceable.
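
A minimal sketch of the suggested guard, reusing the snippet above; only the except clause changes, and the logger name is assumed to match the module's existing logger:

```python
try:
    tokenized = tokenizer(
        list(chunk),
        padding=False,
        truncation=True,
        max_length=max(1, int(max_length)),
        return_length=True,
    )
    token_lengths = tokenized.get("length")
except Exception as exc:
    # Keep the fallback, but leave a trace so missing tok_* diagnostics are explainable.
    logger.debug("Tokenization failed while building OOM diagnostics: %s", exc)
    token_lengths = None
```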


Drop per-item metadata and JSONL OOM outlier capture paths from the local embed runtime while retaining adaptive OOM retry and decoupled inference batch controls. This keeps the PR focused on core stability behavior and reduces instrumentation overhead.

Signed-off-by: jioffe502 <jioffe@nvidia.com>

outs: List[torch.Tensor] = []
target_bs = max(1, int(batch_size))
current_bs = min(target_bs, self._adaptive_batch_size) if self._adaptive_batch_size is not None else target_bs

P1 Adaptive batch size reads stale state across calls

self._adaptive_batch_size is actor-level state that persists between embed() calls. On a fresh call with a new (larger) batch_size argument, line 224 initialises current_bs = min(target_bs, self._adaptive_batch_size), which silently caps the first batch at whatever the worst historical OOM level was, even if VRAM pressure has since been relieved. The streak-based growth inside the loop never propagates back across the call boundary. Consider resetting when target_bs exceeds the stored value, or document the cross-call persistence explicitly.

current_bs = (
    min(target_bs, self._adaptive_batch_size)
    if self._adaptive_batch_size is not None and self._adaptive_batch_size < target_bs
    else target_bs
)

Comment on lines +284 to +293
if not outs:
    return torch.empty((0, 0), dtype=torch.float32)

sorted_embeddings = torch.cat(outs, dim=0)
reordered_embeddings: List[torch.Tensor | None] = [None] * len(sorted_to_original)
for sorted_idx, original_idx in enumerate(sorted_to_original):
    reordered_embeddings[original_idx] = sorted_embeddings[sorted_idx]
if any(emb is None for emb in reordered_embeddings):
    raise RuntimeError("Failed to reconstruct embedding order after length sorting.")
return torch.stack([emb for emb in reordered_embeddings if emb is not None], dim=0)

P1 Length mismatch between sorted_embeddings and sorted_to_original when OOM leaves partial results

If OOM is raised when current_bs <= 1 (re-raised, not retried), execution exits the while loop with outs containing embeddings for only the texts processed before the fatal OOM. torch.cat(outs) produces a tensor shorter than len(sorted_to_original), and the loop for sorted_idx, original_idx in enumerate(sorted_to_original) will raise IndexError when sorted_idx exceeds that length. The if any(emb is None …) guard never executes because the IndexError fires first. Currently unreachable because the fatal re-raise exits before this code, but it is a correctness landmine for future refactors. Add an explicit length pre-check before the indexing loop.
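
A minimal sketch of the suggested pre-check, using the variable names from the snippet above:

```python
# Fail loudly if a fatal OOM left partial results before reordering.
if sorted_embeddings.shape[0] != len(sorted_to_original):
    raise RuntimeError(
        f"Partial embedding results: {sorted_embeddings.shape[0]} rows "
        f"for {len(sorted_to_original)} inputs after length sorting."
    )
```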


@jioffe502 jioffe502 marked this pull request as draft April 8, 2026 19:25
Collaborator Author

jioffe502 commented Apr 9, 2026

DGX bo767 sweep evidence

Ran the dedicated 6-run dgx_8gpu matrix on bo767 with:

  • embed_inference_batch_size in {32,64,96}
  • gpu_embed in {0.25,1.0}
  • fixed embed_batch_size=256, use_heuristics=false

Observed results:

| embed_inference_batch_size | gpu_embed=0.25 | gpu_embed=1.0 | Delta vs 1.0 |
| --- | --- | --- | --- |
| 32 | 158.68 PPS, recall@5=0.8355 | 147.72 PPS, recall@5=0.8335 | +10.96 PPS (+7.42%) |
| 64 | 162.26 PPS, recall@5=0.8355 | 141.94 PPS, recall@5=0.8355 | +20.32 PPS (+14.32%) |
| 96 | 158.00 PPS, recall@5=0.8355 | 135.93 PPS, recall@5=0.8375 | +22.07 PPS (+16.24%) |

  • gpu_embed=0.25 beat gpu_embed=1.0 at every batch size with effectively flat recall@5.
  • Best observed throughput was mb64,gpu_embed=0.25 at 162.26 PPS.
  • mb64,gpu=0.25 is only 2.26% faster than mb32,gpu=0.25, so mb32 remains a plausible conservative fallback.
  • mb96 should not be the default. It is slower than mb64 under both GPU policies and still showed OOM-retry behavior in the finished session output.
  • Historical gpu_embed=1.0 controls show the same shape: mb32=152.99, mb64=145.45, mb96=145.64 PPS.

Recommendation:

  • Default gpu_embed=0.25.
  • If OOM retries are telemetry-only, default embed_inference_batch_size=64.
  • If we want a more conservative default, use embed_inference_batch_size=32 and document 64 as the throughput override.
  • Do not default 96.

@jioffe502
Collaborator Author

bo767 repeat sweep follow-up at gpu_embed=0.25

Ran a 40-run DGX bo767 sweep with gpu_embed=0.25 fixed and embed_inference_batch_size in {1,32,64,256}, repeated 10x per setting.

| embed_inference_batch_size | Passes | Mean PPS | PPS stdev | PPS range | Mean recall@5 | recall@5 stdev | OOM evidence |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 10/10 | 152.82 | 1.38 | 151.46-156.59 | 0.8355 | 0.0006 | none |
| 32 | 10/10 | 159.34 | 1.40 | 156.94-162.11 | 0.8354 | 0.0014 | none |
| 64 | 10/10 | 158.64 | 1.67 | 156.68-163.34 | 0.8309 | 0.0108 | none |
| 256 | 9/10 | 158.51* | 1.59* | 156.12-161.49* | 0.8124* | 0.0246* | 1 failed run, 8 OOM retries in the failed run |

* computed over the 9 successful mb=256 runs only.

Takeaways:

  • 32 is the best default candidate in this repeat sweep: highest mean PPS, no OOM retries, and stable recall.
  • 64 is within observed run-to-run noise on ingest (-0.70 PPS, -0.44% vs 32) and shows worse recall stability, so this does not support moving the default from 32 to 64.
  • 1 is clearly slower (-6.52 PPS, -4.09% vs 32) with no stability upside.
  • 256 is not a viable default candidate: one run failed, the failed run logged 8 "CUDA OOM during embedding; retrying" events, and the successful runs still showed materially worse recall stability.
  • This is directionally consistent with the current HF/local embedding path not realizing reliable end-to-end gains from larger microbatches in this workload.
  • The VDB tail is already de-bottlenecked (VDBUploadOperator batch_size=64), so pushing embed microbatch higher is not buying a downstream write-path win here; in practice it mainly adds memory pressure and instability.

Recommendation / next step:

  • If we want one default from the current evidence, this repeat sweep supports embed_inference_batch_size=32 at gpu_embed=0.25.
  • From here, I think there are two reasonable paths:
    1. Land this change with 32 as the default and treat broader SKU validation as follow-up work.
    2. Keep testing in this PR, but narrow it to cross-SKU validation focused on 32 vs 64 rather than continuing to probe larger batch sizes.

@jioffe502 jioffe502 marked this pull request as ready for review April 10, 2026 20:52