MERITS

(Metaresearch Evaluation Repository to Identify Trustworthy Science)

Hi! Thanks for dropping by and welcome to project MERITS. We're a group of volunteer researchers and developers who are trying to improve the way research outputs are evaluated and rewarded in academia.

Vision statement

We're building a tool that helps people find the ratings researchers have given to articles, so that we can improve the visibility of those ratings, support meta-research, and foster innovation in the scholarly evaluation space.

Project description

MERITS aims to collect ratings of academic research outputs (e.g., preprints, journal articles) and consolidate them into a single database. These may be ratings from existing platforms/servers (e.g., PREreview, Plaudits, Sciety) or, potentially, researcher-submitted ratings.
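To make the data model concrete, here is a minimal sketch in Python of what one consolidated rating record could look like. The field names are illustrative assumptions, not MERITS's actual schema:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Rating:
        """A single rating of a research output, normalised into a common shape.
        Field names are illustrative, not the actual MERITS schema."""
        doi: str                      # DOI of the rated output
        platform: str                 # source platform, e.g. "PREreview"
        dimension: str                # quality rated, e.g. "novelty" or "replicability"
        scale: str                    # rating scale used, e.g. "1-5" or "endorsement"
        value: str                    # the rating itself, as recorded on the platform
        rater: Optional[str] = None   # ORCID or handle of the rater, if public

The idea is that records from every source platform are mapped into one common shape, so a single query can span PREreview, Plaudits, and any future sources.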

Why MERITS?

The dominant heuristics for quality in academia, the journal impact factor (JIF) and citation counts, are deeply problematic and do not measure the full range of qualities we might be interested in as researchers. To combat this, support has grown over the past decade for post-publication peer review and rating, with a number of platforms (e.g., PREreview) now collecting ratings on individual articles across a range of dimensions (e.g., novelty, importance, replicability). MERITS tackles several outstanding issues within this space:

(1) Ratings exist in silos with limited interoperability between platforms, so it is hard for readers to find these ratings when and where they exist.

(2) Most rating dimensions have been chosen ad hoc, pointing to a need for more meta-research on which ratings are relevant to determining output quality.

(3) Only a small minority of researchers contribute ratings; we aim to increase the visibility of this practice and show its value to the scientific community.

How to use MERITS

MERITS currently enables the following (see the usage sketch after this list):

  • Import of preprint rating data from PREreview & Plaudits based on known DOIs
  • Organisation of ratings based on defined classes
  • Database search of ratings based on the data classes (e.g., search by article DOI, rating scale, review platform)
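The sketch below shows how these functions might be called from Python. The merits module, the import_ratings and search helpers, and the DOI are all hypothetical placeholders for illustration; the actual interface may differ:

    # Hypothetical usage sketch; module and function names are assumptions,
    # not the actual MERITS interface.
    from merits import import_ratings, search

    # Pull ratings for known DOIs from the supported platforms.
    import_ratings(
        dois=["10.1101/2021.00.00.000000"],    # placeholder DOI
        platforms=["PREreview", "Plaudits"],
    )

    # Query the consolidated database by any of the data classes.
    for rating in search(doi="10.1101/2021.00.00.000000", platform="PREreview"):
        print(rating.dimension, rating.scale, rating.value)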

Roadmap

Will be updated soon.

Contributor guidelines

We welcome contributions to the project!

Please see our Contribution Guidelines.

Contributors to the MERITS community should also read our Code of Conduct.

Credits

MERITS is a volunteer-led initiative supported by the eLife Innovation Sprint 2021.

Its contributors are (alphabetical order by surname):

  • Cassio Amorim
  • Lonni Besançon
  • Alexander Hertwig @herwix
  • Dawn Holford @dlholf
  • Paola Masuzzo
  • Allan Ochola
  • Cooper Smout @coopersmout
  • Gabriel Stein
  • Naosuki Sunami
  • Aaron Willcox

Licensing

Please see the MERITS license for use of the project.
