explain_weights under cross validation #198
A built-in way to aggregate Explanation objects sounds like a good idea. I'm not sure I like the idea of providing cross-validation utilities in eli5, though. How do you see this feature? Would a function to get a single Explanation from multiple Explanations work for you? I guess DataFrame support (#196) could also make the problem a bit easier.
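One possible shape for such a helper, purely as a sketch: `aggregate_explanations` is a hypothetical name, not an existing eli5 function, and it works on plain feature-to-weight dicts rather than real Explanation objects.

```python
from collections import defaultdict
from statistics import mean, pstdev

def aggregate_explanations(weight_dicts):
    """Merge several {feature: weight} mappings (e.g. one per CV fold)
    into a single {feature: (mean_weight, std_weight)} mapping."""
    collected = defaultdict(list)
    for weights in weight_dicts:
        for feature, weight in weights.items():
            collected[feature].append(weight)
    return {
        feature: (mean(values), pstdev(values))
        for feature, values in collected.items()
    }

# Toy usage with weights from three hypothetical folds.
folds = [
    {"age": 0.9, "income": -0.2},
    {"age": 0.7, "income": -0.1},
    {"age": 0.8, "income": -0.3},
]
print(aggregate_explanations(folds))
```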
I mean, given a list of models trained on the same set of features, assign each feature a weight (and perhaps an uncertainty) to be reported, perhaps with an L2 norm, perhaps L1, perhaps L0, Linf, or something else entirely.
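For concreteness, here is a minimal sketch of that idea using plain scikit-learn; the data, model, and norm choices are illustrative assumptions, not anything eli5 provides.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Collect one coefficient vector per fold.
coefs = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    coefs.append(model.coef_.ravel())
coefs = np.vstack(coefs)                        # shape (n_folds, n_features)

# Different ways to collapse the folds into a single reported weight.
weight_l2 = np.sqrt((coefs ** 2).mean(axis=0))  # L2-style aggregate
weight_l1 = np.abs(coefs).mean(axis=0)          # L1-style aggregate
weight_linf = np.abs(coefs).max(axis=0)         # Linf-style aggregate
weight_l0 = (coefs != 0).mean(axis=0)           # L0-style: how often non-zero
uncertainty = coefs.std(axis=0)                 # per-feature spread across folds
```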
And by "something else" I suppose I mean incorporating rank rather than value.
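A sketch of what rank-based aggregation could look like, continuing from the coefs array of shape (n_folds, n_features) built in the previous snippet; again this is an illustrative assumption, not an eli5 API.

```python
import numpy as np
from scipy.stats import rankdata

# coefs: (n_folds, n_features) array of per-fold coefficients, as above.
# Rank features within each fold by |coefficient| (1 = least important),
# then summarise the ranks instead of the raw values.
ranks = np.vstack([rankdata(np.abs(fold)) for fold in coefs])
mean_rank = ranks.mean(axis=0)      # higher = more consistently important
rank_spread = ranks.std(axis=0)     # how much the folds disagree
order = np.argsort(-mean_rank)      # feature indices, most important first
```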
Visualizing the coefficients obtained in a cross-validation to evaluate their stability, as done in http://gael-varoquaux.info/interpreting_ml_tuto/content/02_why/01_interpreting_linear_models.html#stability-to-gauge-significance, could also be quite useful.
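A minimal matplotlib sketch of that kind of stability plot, assuming the same coefs array of shape (n_folds, n_features) from the earlier snippets and placeholder feature names.

```python
import matplotlib.pyplot as plt

feature_names = ["f%d" % i for i in range(coefs.shape[1])]  # placeholder names

plt.figure(figsize=(8, 4))
plt.boxplot(coefs)                  # one box per feature, spread across folds
plt.xticks(range(1, coefs.shape[1] + 1), feature_names)
plt.axhline(0, color="grey", linewidth=1)
plt.ylabel("coefficient value across CV folds")
plt.title("Coefficient stability under cross-validation")
plt.tight_layout()
plt.show()
```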
I think it would be useful to have a tool which identifies the most important features for a series of models trained on different data subsets. This is hard when feature extraction or transformation occurs, as it is no longer easy to tell which input features are involved in a big way. But in the simple case of a series of feature_importances_ or coef_ attributes, we should have a tool to combine them in one or a few ways and report overall importance.
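A rough sketch of that simple case, combining per-fold feature_importances_ into an overall report via pandas; everything here, including the generated data and column names, is an illustrative assumption.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
feature_names = ["f%d" % i for i in range(X.shape[1])]

# One row of importances per fold.
rows = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    forest = RandomForestClassifier(n_estimators=100, random_state=0)
    forest.fit(X[train_idx], y[train_idx])
    rows.append(forest.feature_importances_)

per_fold = pd.DataFrame(rows, columns=feature_names)
summary = pd.DataFrame({
    "mean_importance": per_fold.mean(),
    "std_importance": per_fold.std(),
}).sort_values("mean_importance", ascending=False)
print(summary)
```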