Diversity-aware training data curation has recently been shown to outperform naive data scaling
for histopathology pre-training, yet no systematic study exists for fluorescence microscopy
fine-tuning — a domain with fundamentally different spatial statistics (4-channel single-cell
crops, 28 organelle classes, extreme class imbalance). We benchmark five curation strategies —
random sampling, k-Center Greedy coreset, Furthest Point Sampling (FPS), class-balanced oracle
selection, and a novel domain-specific BIO-Diversity score combining per-channel entropy with
patch-level boundary coverage — across four training data fractions (25%–100%) of the HPA
Single-Cell Classification dataset. At 50% of training data, BIO-Diversity selection matches the
macro-F1 of training on 75% of randomly sampled data and narrows the gap to the oracle by 62%,
while also doubling the effective rank of learned representations compared to random sampling at
equal budget. Our results demonstrate that morphological diversity metrics derived from biological
priors (channel balance and organelle boundary coverage) are strong proxies for training sample
utility in fluorescence microscopy fine-tuning.
Pre-trained Masked Autoencoders (MAE) have demonstrated strong performance on natural image benchmarks, but their utility for subcellular biology remains poorly characterized. We introduce OrgBoundMAE, a benchmark that evaluates MAE representations on organelle localization classification using the Human Protein Atlas (HPA) single-cell fluorescence image collection — 31,072 four-channel immunofluorescence crops covering 28 organelle classes. Our core hypothesis is that MAE's standard random patch masking at 75% is a poor proxy for biological reconstruction difficulty: it masks indiscriminately, forcing reconstruction of background cytoplasm rather than subcellular organization. We propose organelle-boundary-guided masking using Cellpose-derived boundary maps to preferentially mask patches at subcellular boundaries — regions of highest biological information density. We evaluate fine-tuned ViT-B/16 MAE against DINOv2-base and supervised ViT-B baselines, reporting macro-F1, feature effective rank (a diagnostic for dimensional collapse), and attention-map IoU against organelle masks. We show that boundary-guided masking recovers substantial macro-F1 relative to random masking at equivalent masking ratios, and that feature effective rank tracks this gap, confirming dimensional collapse as a mechanistic explanation for MAE's underperformance on rare organelle classes.
Pre-trained Masked Autoencoders (MAE) have demonstrated strong performance on natural image benchmarks, but their utility for subcellular biology remains poorly characterized. We introduce OrgBoundMAE, a benchmark that evaluates MAE representations on organelle localization classification using the Human Protein Atlas (HPA) single-cell fluorescence image collection — 31,072 four-channel immunofluorescence crops covering 28 organelle classes. Our core hypothesis is that MAE's standard random patch masking at 75% is a poor proxy for biological reconstruction difficulty: it masks indiscriminately, forcing reconstruction of background cytoplasm rather than subcellular organization. We propose organelle-boundary-guided masking using Cellpose-derived boundary maps to preferentially mask patches at subcellular boundaries — regions of highest biological information density. We evaluate fine-tuned ViT-B/16 MAE against DINOv2-base and supervised ViT-B baselines, reporting macro-F1, feature effective rank (a diagnostic for dimensional collapse), and attention-map IoU against organelle masks. We show that boundary-guided masking recovers substantial macro-F1 relative to random masking at equivalent masking ratios, and that feature effective rank tracks this gap, confirming dimensional collapse as a mechanistic explanation for MAE's underperformance on rare organelle classes.
Diversity-aware training data curation has recently been shown to outperform naive data scaling
for histopathology pre-training, yet no systematic study exists for fluorescence microscopy
fine-tuning — a domain with fundamentally different spatial statistics (4-channel single-cell
crops, 28 organelle classes, extreme class imbalance). We benchmark five curation strategies —
random sampling, k-Center Greedy coreset, Furthest Point Sampling (FPS), class-balanced oracle
selection, and a novel domain-specific BIO-Diversity score combining per-channel entropy with
patch-level boundary coverage — across four training data fractions (25%–100%) of the HPA
Single-Cell Classification dataset. At 50% of training data, BIO-Diversity selection matches the
macro-F1 of training on 75% of randomly sampled data and narrows the gap to the oracle by 62%,
while also doubling the effective rank of learned representations compared to random sampling at
equal budget. Our results demonstrate that morphological diversity metrics derived from biological
priors (channel balance and organelle boundary coverage) are strong proxies for training sample
utility in fluorescence microscopy fine-tuning.
Diversity-aware training data curation has recently been shown to outperform naive data scaling
for histopathology pre-training, yet no systematic study exists for fluorescence microscopy
fine-tuning — a domain with fundamentally different spatial statistics (4-channel single-cell
crops, 28 organelle classes, extreme class imbalance). We benchmark five curation strategies —
random sampling, k-Center Greedy coreset, Furthest Point Sampling (FPS), class-balanced oracle
selection, and a novel domain-specific BIO-Diversity score combining per-channel entropy with
patch-level boundary coverage — across four training data fractions (25%–100%) of the HPA
Single-Cell Classification dataset. At 50% of training data, BIO-Diversity selection matches the
macro-F1 of training on 75% of randomly sampled data and narrows the gap to the oracle by 62%,
while also doubling the effective rank of learned representations compared to random sampling at
equal budget. Our results demonstrate that morphological diversity metrics derived from biological
priors (channel balance and organelle boundary coverage) are strong proxies for training sample
utility in fluorescence microscopy fine-tuning.
Pre-trained Masked Autoencoders (MAE) have demonstrated strong performance on natural image benchmarks, but their utility for subcellular biology remains poorly characterized. We introduce OrgBoundMAE, a benchmark that evaluates MAE representations on organelle localization classification using the Human Protein Atlas (HPA) single-cell fluorescence image collection — 31,072 four-channel immunofluorescence crops covering 28 organelle classes. Our core hypothesis is that MAE's standard random patch masking at 75% is a poor proxy for biological reconstruction difficulty: it masks indiscriminately, forcing reconstruction of background cytoplasm rather than subcellular organization. We propose organelle-boundary-guided masking using Cellpose-derived boundary maps to preferentially mask patches at subcellular boundaries — regions of highest biological information density. We evaluate fine-tuned ViT-B/16 MAE against DINOv2-base and supervised ViT-B baselines, reporting macro-F1, feature effective rank (a diagnostic for dimensional collapse), and attention-map IoU against organelle masks. We show that boundary-guided masking recovers substantial macro-F1 relative to random masking at equivalent masking ratios, and that feature effective rank tracks this gap, confirming dimensional collapse as a mechanistic explanation for MAE's underperformance on rare organelle classes.
Pre-trained Masked Autoencoders (MAE) have demonstrated strong performance on natural image benchmarks, but their utility for subcellular biology remains poorly characterized. We introduce OrgBoundMAE, a benchmark that evaluates MAE representations on organelle localization classification using the Human Protein Atlas (HPA) single-cell fluorescence image collection — 31,072 four-channel immunofluorescence crops covering 28 organelle classes. Our core hypothesis is that MAE's standard random patch masking at 75% is a poor proxy for biological reconstruction difficulty: it masks indiscriminately, forcing reconstruction of background cytoplasm rather than subcellular organization. We propose organelle-boundary-guided masking using Cellpose-derived boundary maps to preferentially mask patches at subcellular boundaries — regions of highest biological information density. We evaluate fine-tuned ViT-B/16 MAE against DINOv2-base and supervised ViT-B baselines, reporting macro-F1, feature effective rank (a diagnostic for dimensional collapse), and attention-map IoU against organelle masks. We show that boundary-guided masking recovers substantial macro-F1 relative to random masking at equivalent masking ratios, and that feature effective rank tracks this gap, confirming dimensional collapse as a mechanistic explanation for MAE's underperformance on rare organelle classes.
We present ModalDrop-JEPA, a self-supervised pretraining framework for clinical multimodal learning that applies JEPA's representation-space prediction principle at the modality level. Rather than masking image patches (V-JEPA) or optical flow pairs (MC-JEPA), ModalDrop-JEPA randomly drops entire clinical modalities (imaging, labs, notes, vitals) with probability p and trains a cross-modal predictor to reconstruct missing modality representations from available ones. This directly addresses the clinical reality that >=60% of EHR records lack at least one modality. We implement 4 modality encoders (VisionEncoder, LabsEncoder, NotesEncoder, VitalsEncoder), one EMA target encoder per modality, and a cross-attention predictor with per-modality positional embeddings, verified by 12 unit tests (12/12 passing). At p=0.75 dropout rate, the model produces non-degenerate loss of 1.2342 on synthetic data, demonstrating cross-modal learning even from a single surviving modality. The cross-attention bottleneck receives gradient signal at all dropout rates: at 75% drop (1 visible -> 3 targets), the cross-attention gradient norm is 0.617 vs 0.564 at 25% drop, a 1.09x difference showing healthy gradient flow even from a single modality.
V-JEPA (Bardes et al. 2024) is integrated as the visual backbone of MedOS, a dual-process surgical world model. V-JEPA processes T-frame video clips with aggressive spatiotemporal masking: the context encoder sees only 25% of all N = T × H_p × W_p patches, while the predictor reconstructs 40% target patches via MSE in latent space. An EMA target encoder (momentum=0.996) provides stable regression targets. This replaces the 4-objective MC-JEPA loss (photometric + smoothness + backward + VICReg) with a single MSE objective and shifts temporal scale from 2-frame pairs (33ms) to T-frame clips (seconds). All 57 tests pass (37 original + 20 new V-JEPA tests). A mini model (32px, 4-frame, embed_dim=64) achieves VJEPA loss=1.2909 and confirmed output shapes robot_waypoints=(2,3,6). V-JEPA captures procedure-level temporal dependencies that 2-frame MC-JEPA misses.
We present MedOS-JEPA, an integration of the Motion-Content Joint Embedding Predictive Architecture (MC-JEPA) as the visual backbone of MedOS — a dual-process world model for clinical AI. MC-JEPA jointly learns optical flow and semantic content from surgical video via a shared ViT encoder, without pixel reconstruction. We argue this is the correct pretraining objective for diagnostic belief state encoders: predicting in representation space captures what is surgically meaningful (instrument kinematics, tissue state) rather than texture artifacts. MedOS-JEPA replaces MedOS's CNN backbone with the JEPA encoder, enabling two-phase training: self-supervised pretraining on unlabelled surgical video, then supervised fine-tuning. All 37 unit tests pass in 13.53 s on an NVIDIA A100-SXM4-80GB.
We present MedOS-JEPA, an integration of the Motion-Content Joint Embedding Predictive Architecture (MC-JEPA) as the visual backbone of MedOS — a dual-process world model for clinical AI. MC-JEPA jointly learns optical flow and semantic content from surgical video via a shared ViT encoder, without pixel reconstruction. We argue this is the correct pretraining objective for diagnostic belief state encoders: predicting in representation space captures what is surgically meaningful (instrument kinematics, tissue state) rather than texture artifacts. MedOS-JEPA replaces MedOS's CNN backbone with the JEPA encoder, enabling two-phase training: self-supervised pretraining on unlabelled surgical video, then supervised fine-tuning. All 37 unit tests pass in 13.53 s on an NVIDIA A100-SXM4-80GB.
We present MedOS-JEPA, an integration of the Motion-Content Joint Embedding Predictive Architecture (MC-JEPA) as the visual backbone of MedOS — a dual-process world model for clinical AI. MC-JEPA jointly learns optical flow and semantic content from surgical video via a shared ViT encoder, without pixel reconstruction. We argue this is the correct pretraining objective for diagnostic belief state encoders: predicting in representation space captures what is surgically meaningful (instrument kinematics, tissue state) rather than texture artifacts. MedOS-JEPA replaces MedOS's CNN backbone with the JEPA encoder, enabling two-phase training: self-supervised pretraining on unlabelled surgical video, then supervised fine-tuning. All 37 unit tests pass in 13.53 s on an NVIDIA A100-SXM4-80GB.
We present MedOS-JEPA, an integration of the Motion-Content Joint Embedding Predictive Architecture (MC-JEPA) as the visual backbone of MedOS — a dual-process world model for clinical AI. MC-JEPA jointly learns optical flow and semantic content from surgical video via a shared ViT encoder, without pixel reconstruction. We argue this is the correct pretraining objective for diagnostic belief state encoders: predicting in representation space captures what is surgically meaningful (instrument kinematics, tissue state) rather than texture artifacts. MedOS-JEPA replaces MedOS's CNN backbone with the JEPA encoder, enabling two-phase training: self-supervised pretraining on unlabelled surgical video, then supervised fine-tuning. All 37 unit tests pass in 13.53 s on an NVIDIA A100-SXM4-80GB.