Is your feature request related to a problem? Please describe.
In some extremely challenging information-retrieval scenarios, the items relevant to a query follow a long-tail distribution: a few relevant items are extremely frequent (head items), while many relevant items are extremely rare (tail items). Since commonly used metrics such as precision@k or nDCG@k can reach high values by retrieving head items alone, metrics should also account for the reward of each retrieved item (defined as the inverse of its propensity). For these scenarios, it is therefore recommended to also measure the propensity-scored counterparts of precision (psprecision@k) and nDCG (psnDCG@k).
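To make "reward as inverse propensity" concrete, a sketch of the empirical propensity model of Jain et al. (2016), which maps label frequency to inverse propensity (the function name and default parameters here are illustrative, not an existing API):

```python
import numpy as np

def inverse_propensity(label_freqs, n_points, A=0.55, B=1.5):
    """Inverse propensity under the empirical model of Jain et al. (2016):
    p_l = 1 / (1 + C * (N_l + B)^(-A)), with C = (log n - 1) * (1 + B)^A.
    Rarer labels get a lower propensity and hence a larger reward 1/p_l."""
    C = (np.log(n_points) - 1.0) * (1.0 + B) ** A
    return 1.0 + C * (label_freqs + B) ** (-A)
```

Under this model the reward of a tail item (low frequency) is much larger than that of a head item, which is exactly what lets psprecision@k and psnDCG@k penalise systems that only retrieve head items.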
Describe the solution you'd like
ranx already offers an efficient way to evaluate and compare ranking effectiveness, fusion algorithms, and normalization strategies. Adding propensity-scored metrics would make it even more complete.
Test cases
NOTE: These test cases are available on Google Colab and all the data can be downloaded from Propensity-scored Metrics-Files.zip and from GDrive.
Below are some test cases based on the predicted ranking (pred) and the relevance map (true). I use the pyxclib Python library to compute the propensity-scored metrics as follows:
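The actual arrays are in the files linked above; a hypothetical stand-in with the same structure (per-item scores for a single query; all values are illustrative) could look like:

```python
import numpy as np

# Hypothetical single-query example (illustrative values only).
true = np.array([1, 0, 1, 0, 1])                # binary relevance: items 0, 2, 4 are relevant
pred = np.array([0.9, 0.8, 0.7, 0.2, 0.1])      # prediction scores; induced ranking: 0, 1, 2, 3, 4
inv_prop = np.array([2.0, 1.0, 4.0, 1.0, 8.0])  # reward 1/p per item: rarer items get larger values

topk = np.argsort(-pred)[:3]  # top-3 predicted items
```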
The Propensity-scored precision and nDCG
It requires the prediction scores, true scores, and the (inverse) propensity:
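Since the original snippet is not reproduced here, a minimal self-contained sketch of the two metrics following the definitions of Jain et al. (2016) (single query, binary relevance; the function names are mine, not pyxclib's API, and the psnDCG normalisation shown is one common formulation):

```python
import numpy as np

def psprecision_at_k(pred, true, inv_prop, k):
    """Propensity-scored precision@k: mean reward 1/p of the relevant
    items among the top-k predicted items."""
    topk = np.argsort(-pred)[:k]
    return np.sum(true[topk] * inv_prop[topk]) / k

def psdcg_at_k(pred, true, inv_prop, k):
    """Propensity-scored DCG@k with the usual 1/log2(rank + 1) discount."""
    topk = np.argsort(-pred)[:k]
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    return np.sum(true[topk] * inv_prop[topk] * discounts)

def psndcg_at_k(pred, true, inv_prop, k):
    """Propensity-scored nDCG@k, normalised by the unweighted ideal DCG
    over min(k, #relevant) positions, as in Jain et al. (2016)."""
    n_rel = int(true.sum())
    ideal = np.sum(1.0 / np.log2(np.arange(2, min(k, n_rel) + 2)))
    return psdcg_at_k(pred, true, inv_prop, k) / ideal
```

Note that, unlike their vanilla counterparts, these propensity-scored values can exceed 1; in practice they are usually reported relative to the value obtained on the ideal ranking described below.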
The measured metrics (psprecision and psnDCG) then take the following values:
The ideal Propensity-scored precision and nDCG
The maximum value of the propensity-scored metrics occurs when the predicted ranking places all relevant items ahead of non-relevant items and follows the propensity order (that is, rarer items ahead of less rare items).
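Following that description, the maximising ranking can be built by placing the relevant items first, ordered by decreasing reward; a sketch (function name and arrays are illustrative):

```python
import numpy as np

def ideal_ranking_scores(true, inv_prop):
    """Scores of a ranking that maximises the propensity-scored metrics:
    relevant items first, ordered by decreasing reward 1/p (rarest first),
    followed by the non-relevant items."""
    # Key relevant items by reward; push non-relevant items below all of them.
    key = np.where(true > 0, inv_prop, -np.inf)
    order = np.argsort(-key)
    # Assign strictly decreasing scores along that order.
    scores = np.empty_like(key)
    scores[order] = -np.arange(len(key), dtype=float)
    return scores
```

Feeding these scores to the metric functions in place of the predicted scores yields the maximum attainable psprecision@k and psnDCG@k for the given relevance map and propensities.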
Validation of pyxclib against ranx on the standard precision@k and nDCG@k
To validate the results, I compared the standard precision and nDCG metrics as computed by pyxclib and by ranx.
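For such a cross-check, the standard metrics can also be computed by hand; a minimal sketch with illustrative data whose values can be compared against both libraries (on the ranx side, the same numbers would come from `Qrels`/`Run` dictionaries passed to `evaluate(qrels, run, ["precision@3", "ndcg@3"])`):

```python
import numpy as np

def precision_at_k(pred, true, k):
    """Standard precision@k for one query with binary relevance."""
    topk = np.argsort(-pred)[:k]
    return true[topk].sum() / k

def ndcg_at_k(pred, true, k):
    """Standard nDCG@k with the usual 1/log2(rank + 1) discount."""
    topk = np.argsort(-pred)[:k]
    discounts = 1.0 / np.log2(np.arange(2, k + 2))
    dcg = np.sum(true[topk] * discounts)
    n_rel = int(true.sum())
    idcg = np.sum(discounts[:min(k, n_rel)])
    return dcg / idcg
```

Agreement of these hand-computed values with both pyxclib and ranx on the vanilla metrics gives confidence that any discrepancy in the propensity-scored variants comes from the propensity weighting itself.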