High volume, limited intake capacity
Organizers cannot fully review every submission at intake, so weak papers and high-priority papers end up in the same queue until it is too late to separate them.
Founder-led, pre-seed infrastructure startup
We turn scientific and technical evidence into machine-readable reliability scores through a transparent flaw taxonomy. The taxonomy shows where a paper is weak, what has been checked, and what still requires human judgment. Referee supports structured triage and targeted expert review rather than replacing peer review.
Demo available today: the dashboard and paper-checking workflow are live. False-positive handling and coverage depth are still improving.
Submission volume keeps rising across conferences, journals, and publisher workflows, but intake review capacity does not scale at the same pace. Most teams still depend on fragmented checks and manual triage, which creates bottlenecks and inconsistent screening quality.
Because intake teams cannot fully review every submission, weak papers and high-priority papers sit together in the same review queue until late in the process.
Current tools typically solve one check at a time, leaving teams without a unified view of what has been evaluated and what remains open.
Traditional peer review was not designed for today's scale or threat models, and its outputs are hard to audit in structured, reusable form.
Referee is infrastructure for structured research screening and triage. We do not deliver a black-box score, and we do not replace expert review. Instead, Referee organizes screening results into a structured evaluation record that ties reliability signals to explicit flaw categories and unresolved checks.
Reliability scores are machine-readable outputs tied to concrete evidence. Each score can be inspected through supporting flaw categories and open questions that still need human judgment.
A live dashboard and paper-checking workflow are available for walkthroughs today. The platform is early: coverage breadth and false-positive handling are still being improved.
Our proprietary Common Research Weakness Enumeration (CRWE) is the core framework behind Referee's workflow.
Referee analyzes scientific and technical evidence from submission materials and creates a structured screening context.
Screening checks are mapped to explicit flaw classes so teams can see where the paper appears weak and where evidence is incomplete.
The system logs what has been checked, what remains unresolved, and which items require expert judgment before final decisions.
Outputs become machine-readable reliability signals that support intake triage and targeted human review, not autonomous acceptance or rejection.
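As an illustrative sketch only: the type names, categories, and scoring below are hypothetical, not Referee's actual API or the CRWE taxonomy. A structured, auditable evaluation record along these lines might look like this in Rust:

```rust
// Hypothetical sketch: each reliability signal is tied to an explicit flaw
// category and a check status, so the record stays auditable instead of
// collapsing into a single opaque score.

#[derive(Debug, Clone, Copy, PartialEq)]
enum FlawCategory {
    StatisticalWeakness,  // e.g. unjustified sample size
    MissingEvidence,      // claims without linked data or code
    MethodInconsistency,  // methods contradict reported results
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum CheckStatus {
    Passed,
    Flagged,
    NeedsHumanJudgment, // explicitly routed to expert review
}

struct Check {
    category: FlawCategory,
    status: CheckStatus,
    note: &'static str,
}

struct EvaluationRecord {
    paper_id: &'static str,
    checks: Vec<Check>,
}

impl EvaluationRecord {
    /// Checks that still block a final decision.
    fn open_items(&self) -> usize {
        self.checks
            .iter()
            .filter(|c| c.status != CheckStatus::Passed)
            .count()
    }

    /// A simple triage signal: fraction of checks that passed.
    /// Real scoring would weight categories; this is only a sketch.
    fn reliability_signal(&self) -> f64 {
        let passed = self
            .checks
            .iter()
            .filter(|c| c.status == CheckStatus::Passed)
            .count();
        passed as f64 / self.checks.len() as f64
    }
}

fn demo_record() -> EvaluationRecord {
    EvaluationRecord {
        paper_id: "submission-042",
        checks: vec![
            Check {
                category: FlawCategory::StatisticalWeakness,
                status: CheckStatus::Flagged,
                note: "sample size not justified",
            },
            Check {
                category: FlawCategory::MissingEvidence,
                status: CheckStatus::Passed,
                note: "data and code linked",
            },
            Check {
                category: FlawCategory::MethodInconsistency,
                status: CheckStatus::NeedsHumanJudgment,
                note: "ambiguous preprocessing step",
            },
        ],
    }
}

fn main() {
    let record = demo_record();
    println!(
        "{}: signal {:.2}, {} open item(s)",
        record.paper_id,
        record.reliability_signal(),
        record.open_items()
    );
}
```

The key design point this sketch illustrates is that the triage signal is derived from, and traceable back to, individual categorized checks, so a reviewer can always ask "which checks produced this number, and which still need me?"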

Workflow direction: the demo already shows the dashboard and paper-checking foundations; broader coverage is still in progress.
Referee helps teams triage faster, use expert time more effectively, and maintain clear records of research risk, without claiming that correctness can be fully automated.
Sort submissions by structured risk signals before expert review queues become overloaded.
Focus reviewers on unresolved technical weaknesses instead of repeating broad preliminary checks.
Keep a transparent record of what was checked, what is weak, and what still needs human decision-making.
Create structured intake triage when submission volume outpaces available reviewer capacity.
Unify fragmented integrity checks and improve visibility into what is complete versus still pending.
Run structured evaluation at scale with transparent records of potential weaknesses and unresolved risks.
Beyond this initial wedge, future expansion markets include investment due diligence, pharma and biotech scouting, grant allocation, and enterprise R&D decisions.
Most integrity tools focus on isolated checks such as plagiarism or anomaly detection. Referee adds an evaluation layer that combines checks into a reusable, structured record.
Referee reliability signals are linked to explicit flaw categories and unresolved checks, making the output auditable instead of opaque.
Referee is built as workflow infrastructure for scholarly publishing operations, not as a manifesto or nonprofit movement layer.
The platform is architected and implemented by the founder, with core infrastructure built in Rust for reliability and performance.

Erik Schneider is the founder of Referee. He owns the vision, product roadmap, system architecture, and implementation end to end.
He defines the evaluation model, translates it into product and technical decisions, and builds the platform in Rust. The focus is practical: deliver auditable screening infrastructure that supports expert judgment in high-volume publishing workflows.
If you run conference, journal, publishing, or research-intake workflows, let's walk through the current product together. The demo includes a live dashboard and paper-checking flow, and we'll be explicit about what is ready now versus still in progress.
Or email directly: erik@referee-project.com