@SulRash commented Jul 3, 2025

What does this PR do?

The issue was that the code checked whether "s3" was in dataset.folder_path, but folder_path was a DataFolder object rather than a string. The fix handles both cases: if folder_path is already a string, it is used directly; otherwise, the object is converted to a string. The assertion this replaces was already marked "TODO: Remove".
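For illustration, a minimal sketch of the normalization idea described above (the helper name `uses_s3` is hypothetical; the actual patch modifies the existing check in place rather than adding a function):

```python
# Sketch only, not the exact patch from this PR.
# folder_path may be a plain string or a DataFolder-like object,
# so normalize it to a string before checking for the "s3" prefix.

def uses_s3(folder_path) -> bool:
    path_str = folder_path if isinstance(folder_path, str) else str(folder_path)
    return "s3" in path_str
```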

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guidelines?
  • Did you write any new necessary tests?
  • Did you log the throughput and loss you get to ensure the PR works as expected in actual training?
  • Did you log the memory usage? You can use this tool to understand the memory usage breakdown in nanotron.
  • If you modified anything related to checkpoints, did you verify that saving and reloading checkpoints still works correctly?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@NouamaneTazi My 4 pull requests together end up fixing pretraining with the example script across multiple different pretraining data stages.

NouamaneTazi and others added 10 commits April 14, 2025 14:16

* Fix UnBoundLocalError in `clm_collator.py` (huggingface#339): update clm_collator.py

* can only merge to main from dev (huggingface#348)

* fix init and init scaling factor and run evals in background (huggingface#349): InitScalingMethod; run evals in background (huggingface#352); try adding lightevalrunner to trainer; qos to low; add nanotron_path; some fixes to logs and config; cp instead of sync; eval_interval; serialize sanity checks; add output dir and s3_save path in the config; fix s3 only if defined

* … (huggingface#346): [Feature] Implement CUDA event-based timing for improved GPU performance measurement; fix timer decorator logic to support both CPU and CUDA timers; update docs

* MoE: move MoE from Qwen modeling to src/nn; add grouped MLP; add token permute and unpermute; fix num_tokens_per_expert counting < num_experts; fix init and init scaling factor and run evals in background (huggingface#353); Qwen MoE inference works; update readme; fix router weight initialization and wrong hidden size for the non-MoE MLP in Qwen; add source for router weights and compute router logits in float32; parametrize grouped MLP in column and row linear; add per-param grad-norm logging; fix conversion failure due to a buffer on CPU; config_qwen; fix MoE convert config

* add requeue; add wandb with lighteval and fix eval interval; folder_path should always have s3 when using s3 (fix consumed tokens issue); config qwen

* fix resuming with new data mixture; offsets must be in samples not tokens; sanity check local files when dataset_read_path; better error for new stage

* rmsnorm; sliding window; causal SWA; Revert "rmsnorm" (reverts commit 17dad0a); rope_seq_len_interpolation_factor

* logmixin for intermediate tensors + CP + consumed_token shenanigans when resuming training (huggingface#365): logmixin

* context parallelism (llama3 ring attn) + consumed_token shenanigans (huggingface#366): llama3 ring attn; fix position_ids (make them global); rope_seq_len_interpolation_factor assert; fix rope and cp_pg; fixed consumed_tokens log

Co-authored-by: Nouamane Tazi <[email protected]>
Co-authored-by: elie <[email protected]>
Co-authored-by: “eliebak” <[email protected]>
Co-authored-by: Connector Switch <[email protected]>
Co-authored-by: Kabir Grewal <[email protected]>
Co-authored-by: zzhhjjj <[email protected]>
Co-authored-by: nouamanetazi <[email protected]>