Conversation

@jonb377 commented Dec 19, 2023

To train with sequence lengths longer than 2048, we need to bypass the limitation enforced in run_clm, which is likely a safeguard for finetuning.

Long-term, it would also make sense to retrain a tokenizer with a larger sequence length, but this will unblock early tests with the default tokenizer.
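
For context, the clamp being bypassed looks roughly like the following in Hugging Face's run_clm.py (a sketch; the exact lines vary by transformers version, and the 2048 comes from the default Llama 2 tokenizer's model_max_length):

```python
# Sketch of run_clm.py's block_size handling (varies by version).
if data_args.block_size is None:
    block_size = tokenizer.model_max_length
else:
    if data_args.block_size > tokenizer.model_max_length:
        logger.warning(
            f"The block_size passed ({data_args.block_size}) is larger than "
            f"the maximum length for the model ({tokenizer.model_max_length})."
        )
    # The safeguard in question: with the default Llama 2 tokenizer,
    # model_max_length is 2048, so longer block sizes are silently clipped.
    block_size = min(data_args.block_size, tokenizer.model_max_length)
```

Bypassing it amounts to using data_args.block_size directly rather than the min(...) against tokenizer.model_max_length.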

@suexu1025 left a comment

Thanks for the update!

@jonb377 (Author) commented Dec 19, 2023

Thanks @suexu1025! Are you OK to work off of this branch? I'd prefer not to merge this into the llama2-google-next-training branch since we can train a new tokenizer to work around the existing limitation.
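
(Aside: rather than a full retrain, it may be enough to re-save the existing tokenizer with a larger model_max_length, since that is the value run_clm clamps against. A minimal sketch; the model name and target length are illustrative assumptions, not anything settled in this PR:

```python
from transformers import AutoTokenizer

# Load the default tokenizer (model_max_length is 2048 for Llama 2),
# raise its declared maximum, and save a copy for long-sequence runs.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.model_max_length = 4096  # assumed target sequence length
tokenizer.save_pretrained("./llama2-tokenizer-4096")
```

The saved copy carries the larger limit in its tokenizer_config.json, so the clamp no longer bites.)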
