Check how explain_weights works on regression problems #175
Hm, I'm not sure bias makes sense for explain_weights + xgboost regressor; a regressor predicts values regardless of the mean, and GBMs can handle shifts in the data without any special handling, so there is no need to account for bias explicitly. I haven't seen feature importances for "bias" in decision trees or ensembles. But maybe I'm wrong and it is possible to introduce some notion of bias which makes sense. For example, in LightGBM the first iteration is a synthetic tree which always predicts the bias; while not required, as I understand it, this helps with convergence in practice. So maybe the way to look at it is to compare the first tree with the later trees in the ensemble, or to check several of the low-iteration trees; this is a more general approach which is not specific to bias. I haven't tried it, but it may work.
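To make the first point concrete, here is a minimal sketch (not from the thread; it uses synthetic data and assumes eli5's xgboost support): a GBM regressor absorbs a large constant shift in the target on its own, and explain_weights reports only per-feature importances, with no bias entry.

```python
import numpy as np
import eli5
from xgboost import XGBRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 3))
# Target with a large constant shift; the GBM fits it without any
# explicit bias feature.
y = 100.0 + 2.0 * X[:, 0] + rng.normal(scale=0.1, size=500)

reg = XGBRegressor(n_estimators=50).fit(X, y)

# explain_weights lists per-feature importances only -- there is no
# "bias" row, matching the comment above.
print(eli5.format_as_text(eli5.explain_weights(reg)))
```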
@lopuhin regarding BIAS, I wanted to comment here but then noticed the issue is closed. If BIAS ends up being the most "relevant" feature explained, doesn't it mean that the shift from the mean along the path taken is minimal and none of the features plays a key role? In other words, no feature is a discriminator for this prediction; it's just the expected value?
@alzmcr yes, I think your interpretation is fair 👍
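A hedged illustration of that interpretation (again synthetic, assuming explain_prediction works for an xgboost regressor here): for a sample near the centre of the training data, the `<BIAS>` contribution should dominate and the per-feature contributions should stay small, i.e. the prediction is essentially the expected value.

```python
import numpy as np
import eli5
from xgboost import XGBRegressor

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 3))
y = 10.0 + 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
reg = XGBRegressor(n_estimators=50).fit(X, y)

# A sample at the centre of the feature distribution: the trees barely
# move the prediction away from the mean, so <BIAS> should be the
# largest contribution in the explanation.
x_typical = np.zeros(3)
print(eli5.format_as_text(eli5.explain_prediction(reg, x_typical)))
```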
More of a note to self: I'll expand this into something more reproducible, or close the issue: