Hi, I was reading the code that computes term frequency in LexRank: metrics[term] = tf / max_tf.
I don't understand why you divide by the maximum count of any word in that sentence, rather than using the raw number of occurrences of the word in the sentence (as described in the original LexRank paper), or normalizing by dividing by the total count of all words.
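To make the three normalization schemes being compared concrete, here is a small sketch. The function names are mine for illustration and do not come from the project's code; only the max-tf formula (tf / max_tf) is quoted from it.

```python
from collections import Counter


def tf_raw(words):
    # Raw counts: tf(t) = number of occurrences of t in the sentence
    # (the scheme described in the original LexRank paper).
    return dict(Counter(words))


def tf_max_normalized(words):
    # Max normalization, matching the line quoted above: tf / max_tf.
    # The most frequent term gets 1.0; others are scaled relative to it.
    counts = Counter(words)
    max_tf = max(counts.values())
    return {t: c / max_tf for t, c in counts.items()}


def tf_length_normalized(words):
    # Length normalization: tf divided by the total word count,
    # the third option mentioned in the question.
    counts = Counter(words)
    total = len(words)
    return {t: c / total for t, c in counts.items()}


sentence = "the cat sat on the mat".split()
print(tf_max_normalized(sentence))  # "the" -> 1.0, every other word -> 0.5
```

Max normalization makes the scores independent of sentence length, so a term that dominates a short sentence is weighted the same as one dominating a long sentence, which may be why it can behave differently from the paper's raw counts in practice.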
@IsakZhang Hi, that's a tough question for me. I honestly don't remember why, or even whether, I diverged from the original paper. When I did such things, it was usually because I was inspired by something I saw elsewhere that gave better results, or because my own experiments showed better results.