
Conversation

@yeyu-nvidia
Contributor

What does this PR do?

New feature.

Overview:
This PR implements parallel draft for HF models by combining EAGLE and Medusa. During training, multiple Medusa heads are added and trained jointly with the EAGLE module. During inference, the Medusa heads generate additional draft tokens after all EAGLE tokens.

Usage

Set `parallel_draft_step > 1` in `eagle_config` to enable parallel draft.

# Add a code snippet demonstrating how to use this
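The PR template's snippet was left unfilled; below is a hypothetical usage sketch. Only `parallel_draft_step` is confirmed by this PR — the config shape and everything else here are illustrative assumptions, not the verified `eagle_config` schema.

```python
# Hypothetical sketch (illustrative, not the verified eagle_config schema):
# only parallel_draft_step is confirmed by this PR.
eagle_config = {
    "parallel_draft_step": 4,  # > 1 adds medusa-style parallel draft heads
}

# With parallel_draft_step > 1, inference first drafts the usual eagle
# tokens, then the parallel heads append their draft tokens in one shot.
parallel_draft_enabled = eagle_config["parallel_draft_step"] > 1
print(parallel_draft_enabled)  # True
```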

Testing

Before your PR is "Ready for review"

  • Make sure you read and follow Contributor guidelines and your commits are signed.
  • Is this change backward compatible?: Yes/No
  • Did you write any new necessary tests?: Yes/No
  • Did you add or update any necessary documentation?: Yes/No
  • Did you update Changelog?: Yes/No

Additional Information

@yeyu-nvidia yeyu-nvidia requested a review from a team as a code owner December 8, 2025 19:07
@yeyu-nvidia yeyu-nvidia requested a review from h-guo18 December 8, 2025 19:07
@yeyu-nvidia yeyu-nvidia self-assigned this Dec 9, 2025
@yeyu-nvidia
Contributor Author

/ok to test cdea9ed

@yeyu-nvidia
Contributor Author

/ok to test f8a1088

@codecov

codecov bot commented Dec 10, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 74.79%. Comparing base (51c3614) to head (d7139dd).
⚠️ Report is 1 commits behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main     #664   +/-   ##
=======================================
  Coverage   74.78%   74.79%           
=======================================
  Files         192      192           
  Lines       18814    18810    -4     
=======================================
- Hits        14070    14068    -2     
+ Misses       4744     4742    -2     

☔ View full report in Codecov by Sentry.

draft_logits_list = [eagle_logits]
if self.eagle_config.parallel_draft_step > 1:
    # Get additional draft logits from parallel draft heads
    for draft_head in self.eagle_module.parallel_draft_heads:

I think we can optimize this for-loop and run it in parallel. Can be done in a follow-up PR perhaps.
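A minimal sketch of the suggested optimization, using plain Python lists as a stand-in for the actual torch modules (all names and values here are illustrative): instead of applying one head at a time, stack the head weights and compute all head outputs in a single batched operation.

```python
# Toy stand-in for the per-head projections. In the real module each head
# is a torch layer; here a "head" is just a weight vector over a hidden
# state of floats (illustrative values only).
hidden = [1.0, 2.0, 3.0]
head_weights = [
    [0.1, 0.2, 0.3],  # head 0
    [0.4, 0.5, 0.6],  # head 1
    [0.7, 0.8, 0.9],  # head 2
]

def project(w, h):
    # Dot product of one head's weights with the hidden state.
    return sum(wi * hi for wi, hi in zip(w, h))

# Sequential version: one head at a time (the current for-loop).
sequential = [project(w, hidden) for w in head_weights]

def batched_project(stacked, h):
    # One matrix-vector product over the stacked weights. With torch this
    # would be a single matmul on a stacked tensor, so the GPU evaluates
    # all heads at once instead of looping.
    return [sum(wi * hi for wi, hi in zip(row, h)) for row in stacked]

parallel = batched_project(head_weights, hidden)
assert sequential == parallel
```

The two versions compute identical dot products in the same order, so the batched form is a drop-in replacement; the win in the real module is launching one kernel instead of one per head.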

(
    torch.zeros(b, ttt_step, dtype=loss_mask.dtype, device=loss_mask.device),
    loss_mask[:, 1 + ttt_step :],

for i in range(self.eagle_config.parallel_draft_step):

@h-guo18 h-guo18 Dec 11, 2025

Similar to the above, this for-loop looks parallelizable.

eagle_input_hidden_states = base_model_hidden_states

draft_tokens = []
for _ in range(steps):

The semantics of this `steps` argument seem ambiguous to me. Shall we rename it to `eagle_steps`? Then it would mean we do eagle_steps sequential drafting + num_medusa_heads parallel drafting.
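Under the reviewer's reading, the proposed naming makes the draft length decompose cleanly. A small sketch (the argument values and the helper itself are illustrative, not taken from the PR):

```python
def total_draft_tokens(eagle_steps: int, num_medusa_heads: int) -> int:
    # eagle drafts autoregressively for eagle_steps iterations, then the
    # parallel (medusa) heads each emit one token in a single shot.
    return eagle_steps + num_medusa_heads

print(total_draft_tokens(3, 2))  # 3 sequential + 2 parallel -> 5
```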

@h-guo18 h-guo18 left a comment

Left some comments and questions. Other changes in HF LGTM.
I think it would be great to run the PTQ and AR regression tests before merging. Thanks!

@yeyu-nvidia yeyu-nvidia requested a review from a team as a code owner December 11, 2025 21:31
Signed-off-by: Ye Yu <[email protected]>
@yeyu-nvidia yeyu-nvidia enabled auto-merge (squash) December 12, 2025 21:29
@yeyu-nvidia yeyu-nvidia merged commit 8c6de51 into main Dec 13, 2025
36 checks passed
@yeyu-nvidia yeyu-nvidia deleted the yeyu/hf_eagle_medusa branch December 13, 2025 02:26
b7r6 pushed a commit to weyl-ai/Model-Optimizer that referenced this pull request Dec 18, 2025
soodoshll pushed a commit to soodoshll/TensorRT-Model-Optimizer that referenced this pull request Dec 18, 2025