Written Evaluation guidelines
Please aim to write a report up to the standards of a high-quality referee report for a traditional journal. Consider standard guidelines as well as The Unjournal's emphases. Remember to address any specific considerations mentioned by the evaluation manager, including those in our bespoke evaluation notes.
Please provide a concise summary of your evaluation below. For the full evaluation, you can write it here, provide a link to it, or let us know that you have emailed it.
If you are linking or sending a file, we prefer the 'native format' (Word, LaTeX, Markdown, BibTeX, etc.) rather than a PDF (though we can handle a PDF if necessary).
Accepted: Google Docs, PubPub, Notion, Dropbox, or any public URL.
Potential Evaluation Tools / Guidelines (AI & analysis tools; see the AI/LLM Tool Usage policy below)
As of January 2025, this is an incomplete list of tools we've tried or are considering. We aim to make this a more carefully curated and vetted list.
View full tools guide →
Percentile Metrics (0-100 scale) guidelines
Tip: Use the Calibrate button above to practice rating sample papers and check your calibration.
Overall Assessment
Guidance
Judge the quality of the research heuristically. Consider all aspects of quality, including credibility, importance to future impactful applied research, practical relevance and usefulness, importance to knowledge production, and importance to practice.
Benchmark: serious research in the same area encountered in the last three years.
Claims, Strength & Characterization of Evidence
Guidance
Do the authors do a good job of:
- Stating their main questions and claims clearly?
- Providing strong evidence and powerful approaches to inform these?
- Correctly characterizing the nature of their evidence?
Methods: Justification, Reasonableness, Validity, Robustness
Guidance
Consider the following:
- Are the methods well-justified and explained?
- Are they a reasonable approach to answering the question(s) in this context?
- Are the underlying assumptions reasonable?
- Are the results and methods likely to be robust to reasonable changes in assumptions?
- Do the authors demonstrate robustness?
- Did the authors take steps to reduce bias from opportunistic reporting and questionable research practices?
Advancing Knowledge and Practice
Guidance
To what extent does the project contribute to the field or to practice, particularly in ways relevant to global priorities and impactful interventions?
- Focus on "improvements that are actually helpful" (applied stream)
- Originality and cleverness should be weighted less heavily than at typical journals; we focus on impact
- More weight on "contribution to global priorities" than "contribution to academic field"
- Do the paper's insights inform beliefs about important parameters and intervention effectiveness?
- Does the project add useful value to other impactful research?
- Sound, well-presented null results can also be valuable
Logic and Communication
Guidance
- Are goals and questions clearly expressed?
- Are concepts clearly defined and referenced?
- Is the reasoning transparent? Assumptions explicit?
- Are all logical steps clear and correct?
- Does the writing make arguments easy to follow?
- Are conclusions consistent with the evidence presented?
- Do authors accurately characterize evidence and its support for main claims?
- Are data and analysis relevant to the arguments?
- Are tables, graphs, diagrams easy to understand (no major labeling errors)?
Open, Collaborative, Replicable Research
Guidance
This covers several considerations:
Replicability, reproducibility, data integrity: Would another researcher be able to perform the same analysis and get the same results? Are methods explained clearly enough for credible replication? Is code provided? Is data source clear and as available as reasonably possible?
Consistency: Do numbers in the paper and code output make sense? Are they internally consistent throughout?
Useful building blocks: Do authors provide tools, resources, data, and outputs that might enable future work and meta-analysis?
Reference: COS TOP Guidelines — a framework for evaluating transparency across 8 dimensions including data, code, materials, and preregistration.
Relevance to Global Priorities
Guidance
- Are the topic and approach useful to global priorities, cause prioritization, and high-impact interventions?
- Does the paper consider real-world relevance, policy, and implementation questions?
- Are the setup, assumptions, and focus realistic?
- Do authors report results relevant to practitioners?
- Do they provide useful quantified estimates (costs, benefits) for impact quantification?
- Do they communicate in ways policymakers can understand without misleading oversimplification?
Journal Tier Ratings (0.0-5.0 scale) guidelines
Tier "Should" — Normative Merit
Guidance
Where should this paper be published based on merit alone? Imagine a journal process that is fair, unbiased, and free of noise — where status, connections, and lobbying don't matter.
Non-integer scores encouraged (e.g., 4.6, 2.2).
Tier "Will" — Prediction
Guidance
Where will this research actually be published? If already published and you know where, report the prediction you would have given absent that knowledge.
Non-integer scores encouraged (e.g., 4.6, 2.2).
Claim Identification, Assessment, & Implications (optional but rewarded) guidelines
This section is meant to help practitioners use this research to inform their funding, policymaking, and other decisions. It is not intended as a metric to judge the research quality per se.
This is mainly relevant for empirical research. If 'claim assessment' does not make sense for this paper, please consult the evaluation manager, or skip this section.
Overall Summary
We generally incorporate this into the 'abstract' of your evaluation (see examples at unjournal.pubpub.org).
Confidential Comments
Your comments here will not be public or seen by authors. Please use this section only for comments that are personal/sensitive in nature. Please place most of your evaluation in the public section.
AI/LLM Tool Usage policy
Please disclose your use of AI/LLM tools in this evaluation. AI tools may be used for specific tasks (literature search, methodology checks, writing assistance) but not for generating overall evaluations or ratings. You must independently verify any AI-generated suggestions. See recommended tools →
Survey Questions guidelines
Responses to these will be public unless you mention in your response that you want us to keep them private.
Feedback (responses below will not be public or seen by authors)
If you are interested in discussing this research and your evaluation with the other evaluators, see bit.ly/UJevalcollab. Participation will come with some additional compensation. If you and other evaluators are interested, we may follow up to arrange an (anonymous) discussion space or synchronous meetings.
Let us know if you would like to be contacted for compensated evaluation work when research comes up in your area. To expedite this, fill out the expression-of-interest (EOI) form at Join the Unjournal.