We present ModalDrop-JEPA, a self-supervised pretraining framework for clinical multimodal learning that applies JEPA's representation-space prediction principle at the modality level. Rather than masking image patches (V-JEPA) or optical-flow pairs (MC-JEPA), ModalDrop-JEPA randomly drops entire clinical modalities (imaging, labs, notes, vitals) with probability p and trains a cross-modal predictor to reconstruct the missing modalities' representations from the available ones. This directly addresses the clinical reality that ≥60% of EHR records lack at least one modality. We implement four modality encoders (VisionEncoder, LabsEncoder, NotesEncoder, VitalsEncoder), one EMA target encoder per modality, and a cross-attention predictor with per-modality positional embeddings, verified by 12 unit tests (12/12 passing). At a dropout rate of p=0.75, the model produces a non-degenerate loss of 1.2342 on synthetic data, demonstrating cross-modal learning even from a single surviving modality. The cross-attention bottleneck receives gradient signal at all dropout rates: at 75% drop (1 visible modality -> 3 targets), the cross-attention gradient norm is 0.617 vs. 0.564 at 25% drop, a 1.09x ratio indicating healthy gradient flow even from a single modality.
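A minimal sketch of the modality-dropout sampling described above, assuming the simplest policy that guarantees at least one surviving modality (the modality names come from the abstract; the resampling strategy and function name are illustrative assumptions, not the MedOS-JEPA implementation):

```python
import numpy as np

MODALITIES = ["imaging", "labs", "notes", "vitals"]

def sample_dropout_mask(p, rng):
    """Drop each modality independently with probability p, resampling
    until at least one modality remains visible (so the cross-modal
    predictor always has at least one source to condition on)."""
    while True:
        keep = rng.random(len(MODALITIES)) >= p
        if keep.any():
            return keep

rng = np.random.default_rng(0)
mask = sample_dropout_mask(0.75, rng)
# Visible modalities feed the context encoders; dropped ones become
# prediction targets for the cross-attention predictor.
visible = [m for m, k in zip(MODALITIES, mask) if k]
targets = [m for m, k in zip(MODALITIES, mask) if not k]
```

At p=0.75 this frequently yields the hardest case in the abstract, one visible modality predicting three targets.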
MedOS produces uncalibrated risk scores: sigmoid outputs with no formal coverage guarantee. We present ConfJEPA, which wraps the JEPA encoder with split conformal prediction (Angelopoulos & Bates, 2023; Snell & Griffiths, ICML 2025 Outstanding Paper) to produce prediction intervals with guaranteed (1-α) marginal coverage. On a 1000-sample synthetic calibration set, ConfJEPA achieves 92.4% empirical coverage at α=0.10 (target: 90%), with a mean interval width of 0.907 versus 1.000 for the uncalibrated baseline, a 9.3% reduction. The guarantee is distribution-free: it requires no assumptions on the risk head's output distribution, only exchangeability of the calibration and test samples. All 12 tests pass (12/12). One critical bug was found and fixed: a formula-transcription error in the conformal threshold calculation that collapsed empirical coverage from the target 90% to roughly 0.1%.
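The split conformal threshold is a one-liner that is easy to transcribe wrongly, which is plausibly the class of bug the abstract reports. A minimal sketch with synthetic residuals (absolute-error nonconformity scores are an assumption; ConfJEPA's actual score function is not specified here):

```python
import numpy as np

def conformal_threshold(scores, alpha):
    """Split conformal quantile: the ceil((n+1)(1-alpha))/n empirical
    quantile of the calibration nonconformity scores. Omitting the
    (n+1)/n finite-sample correction, or taking the wrong tail, is the
    kind of transcription error that can destroy empirical coverage."""
    n = len(scores)
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(scores, q_level, method="higher"))

rng = np.random.default_rng(0)
# Synthetic calibration residuals |y - y_hat|, matching the 1000-sample setup.
cal_scores = np.abs(rng.normal(size=1000))
q_hat = conformal_threshold(cal_scores, alpha=0.10)
# Prediction interval for a new point: [y_hat - q_hat, y_hat + q_hat].
```

Because the guarantee relies only on exchangeability, empirical coverage on a fresh test set drawn from the same distribution should land near 90%, as the abstract's 92.4% does.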
V-JEPA (Bardes et al., 2024) is integrated as the visual backbone of MedOS, a dual-process surgical world model. V-JEPA processes T-frame video clips with aggressive spatiotemporal masking: the context encoder sees only 25% of all N = T × H_p × W_p patches, while the predictor reconstructs 40% of the patches as targets via MSE in latent space. An EMA target encoder (momentum = 0.996) provides stable regression targets. This replaces the four-objective MC-JEPA loss (photometric + smoothness + backward + VICReg) with a single MSE objective and shifts the temporal scale from 2-frame pairs (33 ms) to T-frame clips (seconds). All 57 tests pass (37 original + 20 new V-JEPA tests). A mini model (32 px, 4 frames, embed_dim=64) achieves a V-JEPA loss of 1.2909 with the confirmed output shape robot_waypoints=(2,3,6). V-JEPA captures procedure-level temporal dependencies that 2-frame MC-JEPA misses.
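Two mechanisms above are small enough to sketch directly: the context/target patch split over the N = T × H_p × W_p grid, and the EMA target-encoder update. This is a toy sketch under the stated 25%/40% fractions and momentum 0.996; the function names, disjoint-split policy, and dict-of-arrays parameter representation are illustrative assumptions, not the MedOS code:

```python
import numpy as np

def sample_masks(T, Hp, Wp, ctx_frac=0.25, tgt_frac=0.40, rng=None):
    """Split the T*Hp*Wp patch grid into a visible context set (25%)
    and a disjoint latent-prediction target set (40%)."""
    if rng is None:
        rng = np.random.default_rng()
    N = T * Hp * Wp
    perm = rng.permutation(N)
    n_ctx, n_tgt = int(ctx_frac * N), int(tgt_frac * N)
    ctx_idx = perm[:n_ctx]                # seen by the context encoder
    tgt_idx = perm[n_ctx:n_ctx + n_tgt]   # regressed in latent space (MSE)
    return ctx_idx, tgt_idx

def ema_update(target_params, online_params, momentum=0.996):
    """Target encoder tracks the online encoder without gradients:
    theta_tgt <- m * theta_tgt + (1 - m) * theta_online."""
    return {k: momentum * target_params[k] + (1.0 - momentum) * online_params[k]
            for k in target_params}

ctx, tgt = sample_masks(T=4, Hp=2, Wp=2, rng=np.random.default_rng(0))
new_tgt = ema_update({"w": np.ones(3)}, {"w": np.zeros(3)})
```

The high momentum means the regression targets drift slowly, which is what keeps the single-MSE objective from collapsing without the extra regularizers MC-JEPA needed.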
We present MedOS-JEPA, an integration of the Motion-Content Joint Embedding Predictive Architecture (MC-JEPA) as the visual backbone of MedOS — a dual-process world model for clinical AI. MC-JEPA jointly learns optical flow and semantic content from surgical video via a shared ViT encoder, without pixel reconstruction. We argue this is the correct pretraining objective for diagnostic belief state encoders: predicting in representation space captures what is surgically meaningful (instrument kinematics, tissue state) rather than texture artifacts. MedOS-JEPA replaces MedOS's CNN backbone with the JEPA encoder, enabling two-phase training: self-supervised pretraining on unlabelled surgical video, then supervised fine-tuning. All 37 unit tests pass in 13.53 s on an NVIDIA A100-SXM4-80GB.
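The core of the argument is the objective itself: the predictor is scored against target-encoder embeddings, not pixels, so texture detail that does not move the embedding costs nothing. A minimal sketch of that representation-space loss (function name and shapes are illustrative; the stop-gradient on the target side is stated by the JEPA family of methods, not specific to this codebase):

```python
import numpy as np

def latent_mse_loss(pred_embed, target_embed):
    """Representation-space objective: MSE between the predictor's output
    and the (gradient-free) target-encoder embedding. No pixel
    reconstruction term appears anywhere in the loss."""
    return float(np.mean((pred_embed - target_embed) ** 2))

# Toy shapes: batch of 2 clips, 8-dim embeddings.
pred = np.zeros((2, 8))
tgt = np.ones((2, 8))
loss = latent_mse_loss(pred, tgt)
```

Supervised fine-tuning in phase two then trains a task head on top of the pretrained encoder with a standard labeled objective; only the pretraining phase uses this latent loss.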