The predictions of a linear model are invariant to the scale of its weights. Thus the scale of the weights is determined by regularisation (and, I think, by the bias term if it is unregularised). Are weights therefore more comparable across competing linear models if scaled by default (e.g. to a unit vector)?
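To make the invariance claim concrete, here is a small sketch (toy data and weights are made up for illustration): for a linear classifier, the predicted class depends only on the sign of the decision function, so multiplying the weights and bias by any positive constant leaves the predictions unchanged.

```python
import numpy as np

# Toy data: 5 samples, 2 features (illustrative values only)
X = np.array([[1.0, 2.0], [3.0, 0.5], [-1.0, 1.0], [2.0, -2.0], [0.5, 0.5]])

w = np.array([0.8, -0.4])  # weight vector
b = 0.1                    # bias term

# Class predictions depend only on the sign of X @ w + b,
# so scaling (w, b) by any positive constant c leaves them unchanged.
for c in (1.0, 10.0, 0.01):
    preds = np.sign(X @ (c * w) + c * b)
    print(preds)
```

Note that this holds for the hard class predictions, not for the predicted probabilities, which is exactly the caveat raised below.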
Coefficient scale affects the probability output: if the coefficients are large, the classifier is more "confident", i.e. the probabilities are closer to 0 and 1, at least for logistic regression.
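A minimal sketch of this effect (the decision-function values below are made up): in logistic regression the probability is `sigmoid(w·x + b)`, so scaling the coefficients by a constant `c > 1` scales the decision-function value and pushes probabilities toward 0 and 1 without moving the decision boundary.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative decision-function values w·x + b for three samples
z = np.array([-1.0, 0.5, 2.0])

# Multiplying the coefficients by c multiplies z by c; larger c
# makes every probability more extreme (closer to 0 or 1).
for c in (1.0, 5.0):
    print(sigmoid(c * z))
```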
Currently we're normalizing coefficients when computing colors in show_weights / show_prediction, so looking at the colors is already a way to compare two models with different coefficient scales.
I can see how normalizing coefficients to unit scale (and showing this scale) could be helpful. But it could also be more confusing: what users see would no longer be the vanilla coefficients.
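The proposed unit scaling could look like the following sketch (the coefficient vectors are hypothetical; this is not eli5's actual implementation): each model's coefficients are divided by their L2 norm, so only the relative importance of features remains comparable across models.

```python
import numpy as np

# Hypothetical coefficient vectors from two competing linear models
# fit on the same features but at very different scales.
coef_a = np.array([2.0, -4.0, 1.0])
coef_b = np.array([0.2, -0.35, 0.15])

def unit_scale(coef):
    """Scale a coefficient vector to unit L2 norm, keeping only the
    relative magnitudes and signs of the features."""
    return coef / np.linalg.norm(coef)

print(unit_scale(coef_a))
print(unit_scale(coef_b))
```

The trade-off discussed above applies: after this transform the displayed numbers are no longer the model's raw coefficients, so the original scale would need to be shown alongside.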