ORVS is an executable verification skill that scores clinical AI responses on four weighted dimensions: clinical accuracy (weighted 0.30), safety and red-flag detection, therapeutic management, and resource stewardship.
We describe a clinical AI verification system for rheumatology consisting of two components. The first is a post-generation verification loop: a candidate response to a clinical query is scored by a separate evaluator on four dimensions (clinical accuracy, safety, therapeutic management, resource stewardship), and responses scoring below a threshold are regenerated with targeted corrective feedback.
We present the Optimistic Response Verification System (ORVS) with Quantum Semantic (QS) retrieval, a verification-first architecture for specialist clinical AI in rheumatology. ORVS generates candidate responses optimistically, then subjects each to a structured verification loop scored across four weighted dimensions: clinical accuracy (weighted 0.30), safety, therapeutic management, and resource stewardship.
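The scoring-and-regeneration loop described above can be sketched as follows. Only the dimension names and the 0.30 weight for clinical accuracy come from the text; the remaining weights, the 0.8 threshold, and the `generate`/`evaluate` callables are illustrative placeholders, not the authors' implementation.

```python
# Sketch of a weighted verification loop. The 0.30 weight for clinical
# accuracy is stated in the text; the other weights and the threshold
# are illustrative placeholders.

WEIGHTS = {
    "clinical_accuracy": 0.30,       # stated in the text
    "safety": 0.30,                  # placeholder
    "therapeutic_management": 0.25,  # placeholder
    "resource_stewardship": 0.15,    # placeholder
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (each in [0, 1]) into one number."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def verify_loop(query, generate, evaluate, threshold=0.8, max_rounds=3):
    """Generate optimistically; regenerate with feedback while below threshold."""
    feedback = None
    for _ in range(max_rounds):
        response = generate(query, feedback)
        scores = evaluate(query, response)  # per-dimension scores in [0, 1]
        if weighted_score(scores) >= threshold:
            return response, scores
        # Corrective feedback names the two weakest dimensions.
        feedback = sorted(scores, key=scores.get)[:2]
    return response, scores
```

The loop returns the last candidate even if it never clears the threshold, so callers can still inspect the final scores.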
Longitudinal electronic health record (EHR) question answering remains difficult because clinically meaningful evidence is distributed across visits, data models, and document types, while many user questions depend on sequence, timing, and provenance rather than on isolated facts. Existing work has produced strong patient trajectory models, mature interoperability standards, and valuable clinical NLP benchmarks, but practical systems for evidence-backed patient-level question answering still face a central gap: they must reason faithfully across heterogeneous source formats without flattening away temporal structure or overstating certainty.
We present ORVS (Optimistic Reasoning with Verification and Synthesis), a clinical reasoning architecture for AI agents that combines stochastic directed acyclic graphs (DAGs) with proof-of-history verification and optimistic computation. Unlike conventional retrieval-augmented generation (RAG) pipelines that retrieve before generating, ORVS generates clinical reasoning optimistically, then verifies it against a knowledge graph of more than 12,200 medical documents, augmenting only on verification failure.
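The generate-then-verify control flow above can be sketched as a short loop. Everything here is a hypothetical stand-in for the components the abstract names: `extract_claims`, `kg.supports`, and `augment` are assumed interfaces, not the system's actual API.

```python
# Sketch of the optimistic generate -> verify -> augment flow.
# All callables and the KG interface are assumed stand-ins.

def orvs_answer(query, generate, extract_claims, kg, augment, max_passes=2):
    """Generate optimistically; only on verification failure, retrieve
    supporting context from the knowledge graph and regenerate."""
    context = None
    for _ in range(max_passes):
        answer = generate(query, context)
        unsupported = [c for c in extract_claims(answer) if not kg.supports(c)]
        if not unsupported:
            # Every claim verified: no retrieval cost was paid at all.
            return answer
        # Targeted retrieval only for the claims that failed verification.
        context = augment(unsupported)
    return answer
```

The key property this sketch illustrates is that retrieval is conditional: a response whose claims all verify never touches the retriever, inverting the retrieve-first ordering of a standard RAG pipeline.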
We present ModalDrop-JEPA, a self-supervised pretraining framework for clinical multimodal learning that applies JEPA's representation-space prediction principle at the modality level. Rather than masking image patches (V-JEPA) or optical flow pairs (MC-JEPA), ModalDrop-JEPA randomly drops entire clinical modalities (imaging, labs, notes, vitals) with probability p and trains a cross-modal predictor to reconstruct missing modality representations from available ones.
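A minimal sketch of the modality-level dropout described above: each modality's embedding is dropped with probability p, and a predictor reconstructs the dropped embeddings from the ones that remain. The linear predictor, mean-pooling, and embedding dimension are illustrative simplifications of the cross-modal predictor, not the paper's architecture.

```python
# Modality-level dropout sketch: drop whole modalities with probability p,
# then predict the missing embeddings from the available ones.
import numpy as np

rng = np.random.default_rng(0)
MODALITIES = ["imaging", "labs", "notes", "vitals"]
D = 16  # embedding dimension (illustrative)

def modal_drop(embs: dict, p: float = 0.5):
    """Split modality embeddings into kept (context) and dropped (targets).
    Always keep at least one modality so the predictor has input."""
    kept = {m: e for m, e in embs.items() if rng.random() >= p}
    if not kept:
        m = MODALITIES[rng.integers(len(MODALITIES))]
        kept = {m: embs[m]}
    dropped = {m: e for m, e in embs.items() if m not in kept}
    return kept, dropped

def predict_missing(kept, W):
    """Linear map of the mean of available embeddings: a stand-in for
    the learned cross-modal predictor."""
    ctx = np.mean(list(kept.values()), axis=0)
    return W @ ctx

embs = {m: rng.standard_normal(D) for m in MODALITIES}
kept, dropped = modal_drop(embs, p=0.5)
W = rng.standard_normal((D, D)) / np.sqrt(D)
losses = {m: float(np.mean((predict_missing(kept, W) - e) ** 2))
          for m, e in dropped.items()}
```

In training, `W` would be replaced by the cross-modal predictor and `losses` by the representation-space regression objective; the split logic itself is the part the abstract specifies.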
We present SparseWorldMed, a clinical episode world model that replaces O(N²) full attention with data-dependent TopK sparse attention (O(NK)). Clinical timelines are inherently sparse: patients remain stable for extended periods, punctuated by rapid deterioration events requiring inter-temporal context.
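The TopK attention pattern described above can be sketched in a few lines. For clarity this version scores all query-key pairs before selecting, so it is still O(N²) compute; the paper's O(NK) cost presumably relies on an efficient TopK selection that this sketch does not reproduce.

```python
# Sketch of data-dependent TopK sparse attention: each query attends
# only to its k highest-scoring keys.
import numpy as np

def topk_sparse_attention(Q, K, V, k=4):
    """Attend over only the top-k keys per query.
    Note: scoring here is dense (O(N^2)); the O(NK) claim assumes an
    efficient selection mechanism not reproduced in this sketch."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])             # (N, N)
    idx = np.argpartition(scores, -k, axis=-1)[:, -k:]  # top-k key indices per query
    rows = np.arange(Q.shape[0])[:, None]
    sel = scores[rows, idx]                             # (N, k) selected scores
    w = np.exp(sel - sel.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                  # softmax over k keys only
    return np.einsum("nk,nkd->nd", w, V[idx])           # (N, d)
```

Setting k = N recovers full softmax attention exactly, which makes the sparsity level a tunable knob between fidelity and cost.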