Conversation

@tomaz-suller
During experiments, we noticed that the strategy suggested in the XGBoost ranker notebook -- separating a holdout evaluation set to use as the training set for the ranker model -- did not yield reasonable results. To address this and allow us to train the ranker on the complete training set, we developed the scripts in this PR:

  • run_train_kfold.py trains recommender models using K-fold cross-validation, and saves the model trained on each fold for later inference;
  • run_ranker_dataset.py uses the saved models to compute item features, as well as several other features derived from user interactions and item features.
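The per-fold training loop could be sketched roughly as below. This is a hypothetical illustration, not the actual contents of run_train_kfold.py: the `train_kfold` function, the interaction-count "model", and the output layout are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of run_train_kfold.py's core loop: train one
# recommender model per fold and save it for later inference.
import pickle
from pathlib import Path

import numpy as np
from sklearn.model_selection import KFold


def train_kfold(interactions, n_splits=5, out_dir="models"):
    """Train a model on each fold and persist it; return the saved paths.

    `interactions` is assumed to be an array of (user_id, item_id) rows.
    """
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    paths = []
    splitter = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    for fold, (train_idx, _val_idx) in enumerate(splitter.split(interactions)):
        train = interactions[train_idx]
        # Placeholder "model": per-item interaction counts on this fold's
        # training split; a real recommender would be fitted here instead.
        model = {"item_counts": np.bincount(train[:, 1])}
        path = Path(out_dir) / f"fold_{fold}.pkl"
        with open(path, "wb") as f:
            pickle.dump(model, f)
        paths.append(path)
    return paths
```

Saving the model from each fold means every training interaction gets out-of-fold predictions from a model that never saw it, which is what lets the ranker consume features over the complete training set.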
