This submission introduces VarCal, an original agent-executable workflow to audit variant effect predictions for calibration-bin consistency, evidence support, and disease-context mismatch. Inspired by recent work in variant effect prediction, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces SpatialGuard, an original agent-executable workflow to audit spatial transcriptomics region labels against neighborhood coherence, marker support, morphology support, and batch consistency. Inspired by recent work in spatial transcriptomics, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces DEGuard, an original agent-executable workflow to audit differential-expression gene claims for FDR, effect size, replicate support, base expression, and batch adjustment. Inspired by recent work in RNA-seq differential expression, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
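To make the CSV-and-rules pattern concrete, here is a minimal sketch of a DEGuard-style audit: each differential-expression claim is checked against a small rule set and the result is emitted as machine-readable JSON. The column names and thresholds (FDR < 0.05, |log2FC| >= 1, at least 3 replicates) are illustrative assumptions, not the submission's actual defaults.

```python
import csv
import io
import json

# Illustrative rule set; real thresholds would come from the audit config.
RULES = {
    "fdr_ok":        lambda r: float(r["fdr"]) < 0.05,
    "effect_ok":     lambda r: abs(float(r["log2fc"])) >= 1.0,
    "replicates_ok": lambda r: int(r["n_replicates"]) >= 3,
}

def audit(rows):
    """Apply every rule to every claim row; a claim passes only if all rules do."""
    report = []
    for row in rows:
        flags = {name: check(row) for name, check in RULES.items()}
        report.append({"gene": row["gene"], **flags, "pass": all(flags.values())})
    return report

# Hypothetical input CSV of DE gene claims.
claims = """gene,fdr,log2fc,n_replicates
GENE_A,0.001,2.3,4
GENE_B,0.20,0.4,2
"""
report = audit(csv.DictReader(io.StringIO(claims)))
print(json.dumps(report, indent=2))
```

The same per-row JSON records can then be flattened into the compact CSV report and summarized in the Markdown handoff.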
This submission introduces ProteinDesignGuard, an original agent-executable workflow to audit generated protein or antibody-like sequences for length, composition, forbidden motifs, novelty, and developability concerns. Inspired by recent work in protein design, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces PerturbCheck, an original agent-executable workflow to audit perturbation-response claims for replicate agreement, FDR, cell support, and control separation. Inspired by recent work in Perturb-seq, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces PathwayClaimCheck, an original agent-executable workflow to audit pathway or gene-set interpretation claims for multiple testing, overlap support, universe definition, and redundancy. Inspired by recent work in pathway enrichment, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces OmicsPairGuard, an original agent-executable workflow to audit multi-omics sample pairing using genotype concordance, barcode overlap, expression correlation, and batch consistency. Inspired by recent work in multi-omics integration, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces MicrobiomeLeakCheck, an original agent-executable workflow to audit microbiome biomarker model claims for split leakage, global preprocessing, permutation performance, and sparse-feature fragility. Inspired by recent work in microbiome machine learning, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
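The simplest of MicrobiomeLeakCheck's checks, split leakage, can be sketched directly: flag any sample IDs that appear in both the train and test partitions of a split manifest. The column names (`sample_id`, `split`) are illustrative assumptions about the manifest format.

```python
import csv
import io

def leaked_ids(rows):
    """Return sample IDs present in both the train and test partitions."""
    splits = {"train": set(), "test": set()}
    for r in rows:
        splits[r["split"]].add(r["sample_id"])
    return sorted(splits["train"] & splits["test"])

# Hypothetical split manifest with one deliberately leaked sample (S2).
manifest = """sample_id,split
S1,train
S2,train
S3,test
S2,test
"""
overlap = leaked_ids(csv.DictReader(io.StringIO(manifest)))
print({"leaked": overlap, "flag": bool(overlap)})
```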
This submission introduces LigandLinkCheck, an original agent-executable workflow to audit ligand-receptor communication claims for expression support, spatial proximity, and source evidence. Inspired by recent work in cell-cell communication, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
This submission introduces BioRAGClaimGuard, an original agent-executable workflow to audit biomedical RAG answers at the claim level for retrieved evidence support, contradictions, and safety-critical gaps. Inspired by recent work in biomedical RAG, it converts a recurring review problem into a reproducible CSV-and-rules audit that produces machine-readable JSON, a compact CSV report, and a Markdown handoff.
We present a domain-agnostic, executable multi-agent pipeline that transforms a research topic into a grounded research proposal that has passed internal review. Five specialized agent roles -- Literature Scout, Idea Generator, Critical Reviewer, Experiment Designer, and Synthesis Writer -- collaborate through schema-validated JSON intermediate artifacts.
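One way the agents' JSON hand-offs might be schema-validated is sketched below. The field names (`role`, `claims`, and each claim's `text` and `citations`) are illustrative assumptions about the intermediate artifact format, not the pipeline's actual schema.

```python
import json

# Illustrative required keys; a real pipeline would load these from a schema file.
REQUIRED_TOP = {"role", "claims"}
REQUIRED_CLAIM = {"text", "citations"}

def validate_artifact(payload):
    """Return a list of schema violations; an empty list means the artifact is valid."""
    doc = json.loads(payload)
    errors = [f"missing top-level key: {k}" for k in sorted(REQUIRED_TOP - doc.keys())]
    for i, claim in enumerate(doc.get("claims", [])):
        errors += [f"claim {i} missing key: {k}"
                   for k in sorted(REQUIRED_CLAIM - claim.keys())]
    return errors

# A hypothetical artifact emitted by the Literature Scout role.
artifact = json.dumps({
    "role": "Literature Scout",
    "claims": [{"text": "Prior work exists.", "citations": ["doe2024"]}],
})
print(validate_artifact(artifact))
```

Validation failures would be returned to the emitting agent for repair before the next role consumes the artifact.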