Problem
The funding dimension (20/100 points) assumes all published research has external funding. This systematically disadvantages humanities, arts, and social science publishers whose content is legitimately unfunded.
Crossref's coverage API reports "X% of DOIs have funder metadata" but makes no distinction between:
- Missing metadata — funded research where the publisher didn't deposit acknowledgements (a real gap)
- Unfunded research — work that genuinely had no external funding (not a gap)
A humanities publisher with perfect metadata everywhere else is capped at 80/100.
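The cap can be made concrete with a small sketch. Only the 20-point funding dimension comes from the text above; the other dimension names and weights are illustrative assumptions, not the live methodology:

```python
# Hypothetical dimension weights -- only the 20-point funding dimension
# is taken from the methodology described above; the rest are illustrative.
WEIGHTS = {
    "funding": 20,
    "references": 20,
    "abstracts": 20,
    "orcid": 20,
    "licenses": 20,
}

def total_score(coverage: dict) -> float:
    """coverage maps dimension -> fraction of DOIs with that metadata (0..1)."""
    return sum(WEIGHTS[dim] * coverage.get(dim, 0.0) for dim in WEIGHTS)

# A humanities publisher with perfect metadata everywhere except funding:
humanities = {"references": 1.0, "abstracts": 1.0, "orcid": 1.0,
              "licenses": 1.0, "funding": 0.0}
print(total_score(humanities))  # 80.0 -- capped no matter how good the rest is
```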
Why this is hard to fix with current data
- No discipline signal at the member level. The Crossref Member API doesn't include subject classifications. Journals have optional ASJC codes, but publishers often span multiple disciplines.
- No "unfunded" assertion in the Crossref schema. Publishers can deposit funder info when it exists, but can't declare "no funding received."
- Per-content-type breakdowns help but don't solve it. We can see that book chapters score differently from journal articles, but a philosophy monograph and a biomedical monograph get the same treatment.
This is structurally similar to #4 (ORCID penalizing institutional authors) — the scoring assumes a model of individual, funded academic authorship that doesn't hold across all scholarly publishing.
Possible approaches
- Content-type-aware weight adjustment — Reduce funding weight for content types structurally less likely to be funded (monographs, book chapters, edited books), redistribute points to other dimensions.
- Threshold-based normalization — If funding coverage is near-zero (<5%) across all content types, treat the dimension as N/A and normalize the score out of 80.
- External enrichment — Use OpenAlex or ASJC subject classifications at the journal level to infer discipline mix and adjust weights accordingly.
- Methodology note — At minimum, document this limitation on the site.
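The threshold-based normalization above could be sketched as follows. This is a minimal illustration: the 5% threshold is the figure from the proposal, but the function shape and variable names are assumptions, not the live scoring code:

```python
FUNDING_WEIGHT = 20   # points assigned to the funding dimension
MAX_SCORE = 100
NA_THRESHOLD = 0.05   # below this coverage, treat funding as N/A

def normalized_score(raw_score: float, funding_coverage: float) -> float:
    """raw_score is out of 100, including the funding dimension.

    If funder coverage is near zero across all content types, drop the
    funding dimension and rescale the remaining 80 points back to 100.
    """
    if funding_coverage < NA_THRESHOLD:
        funding_points = FUNDING_WEIGHT * funding_coverage
        remainder = raw_score - funding_points  # score on the other 80 points
        return remainder * MAX_SCORE / (MAX_SCORE - FUNDING_WEIGHT)
    return raw_score

# A publisher with perfect non-funding metadata and no funder deposits:
print(normalized_score(80.0, 0.0))  # 100.0 instead of a hard 80 cap
```

One design question this raises: applying the threshold per member rather than per content type would also reward funded-science publishers who simply never deposit funder metadata, which is exactly the "missing vs. unfunded" ambiguity described above.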
Related
References
Source
Raised by Andy Byers on LinkedIn