Consumer wearable biosensors generate continuous multivariate physiological time series — heart rate variability, photoplethysmography-derived SpO2, skin temperature, and accelerometry — that are shaped by a hierarchy of biological rhythms operating across timescales from minutes to weeks. Existing time-series foundation models apply generic positional encodings that are agnostic to this temporal structure, forcing the model to infer circadian and ultradian patterns from data alone and conflating pathological deviations with normal chronobiological variation. We introduce BioWaveNet, the first temporal foundation model to incorporate coupled oscillator dynamics as an architectural prior through a novel Kuramoto Circadian Positional Encoding (K-CPE) layer. BioWaveNet learns a synchronized master oscillator whose phase tracks circadian time, enabling the attention mechanism to explicitly compute within-phase and cross-phase similarity. We prove that standard sinusoidal positional encodings are a limiting degenerate case of K-CPE when inter-oscillator coupling strength K→0. Pre-trained on a curated corpus of 3.2 billion biosensor epochs spanning 847,000 person-nights from seven public datasets (MESA, NHANES, PhysioNet Apnea-ECG, SHHS, MIMIC-IV Waveforms, LifeSnaps, and PMData), BioWaveNet achieves state-of-the-art performance across four independent benchmarks: circadian phase estimation (MAE 0.28h vs. 0.71h for best baseline), disease episode detection (rhinitis, OSA, paroxysmal AF; mean AUROC 0.912), 24-hour HRV forecasting (RMSE 3.8ms vs. 6.1ms), and physiological anomaly detection (AUPRC 0.847). Critically, rhinitis-active periods, obstructive sleep apnea events, and atrial fibrillation episodes each occupy distinct, separable regions of the circadian-residual embedding space, enabling zero-shot disease fingerprinting. We release pre-trained model weights, training code, and benchmark evaluation harness.
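The coupled oscillator dynamics behind K-CPE can be illustrated with a minimal Kuramoto simulation (a sketch, not the paper's implementation; the oscillator count, frequency spread, and coupling values are illustrative assumptions): at large coupling strength K the phases synchronize, while in the K→0 limit each phase advances independently at its natural frequency, recovering the fixed sinusoids of standard positional encodings.

```python
import numpy as np

def kuramoto_order(n_osc=32, K=2.0, steps=5000, dt=0.01, seed=0):
    """Euler-integrate the Kuramoto model
    dtheta_i/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i)
    and return the order parameter r in [0, 1] (1 = full synchrony)."""
    rng = np.random.default_rng(seed)
    omega = 1.0 + 0.1 * rng.standard_normal(n_osc)  # natural frequencies (illustrative spread)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_osc)    # random initial phases
    for _ in range(steps):
        # pairwise coupling term; vanishes identically when K = 0
        coupling = (K / n_osc) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    return float(np.abs(np.exp(1j * theta).mean()))

r_coupled = kuramoto_order(K=2.0)  # strong coupling: phases lock
r_free    = kuramoto_order(K=0.0)  # K -> 0 degenerate case: phases stay incoherent
```

With K well above the critical coupling the order parameter approaches 1, while at K = 0 it stays near the 1/√n level expected for independent phases.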
Solar power generation depends critically on accurate short-term (minutes to hours) forecasting of global horizontal irradiance (GHI): sudden changes cause grid instability and reduce the economic viability of solar farms. Current operational forecasts achieve 20-30% MAPE (mean absolute percentage error) at the 30-minute horizon, with degradation at longer horizons. This study develops a hybrid forecasting system combining persistence-based methods, machine learning ensemble models, and ground-mounted sky camera imagery. The system integrates: (1) persistence models (GHI(t+30min) ≈ GHI(t)); (2) autoregressive models (ARIMA); (3) machine learning ensembles (Random Forest, XGBoost, LightGBM); and (4) computer vision analysis of cloud motion from sky cameras. We train and validate on 2 years of high-frequency irradiance data (1-minute resolution) from 15 solar sites across diverse climates (desert, temperate, subtropical), testing ten forecasting horizons (5, 15, 30, 60, 120, 180, 240, 360, 480, and 600 minutes). The hybrid ensemble achieves 18.2% MAPE for 30-minute forecasts versus 20.5% for the ARIMA baseline, an improvement of 2.3 percentage points, and recovers 94.8% of the maximum theoretical forecast skill. Beyond 4 hours, all models degrade toward the climatological mean (~15% MAPE). Sky camera integration reduces RMSE by 12-15% for 15-30 minute horizons, where cloud speed dominates, but provides minimal benefit beyond 2 hours. Feature importance analysis ranks the 60-minute irradiance history highest (32%), followed by hour of day (8.1%), clear-sky index deviations (6.2%), and recent rate of change (5.3%). The system adapts to seasonal patterns and cloud types, and validation on held-out 2023 data shows maintained performance. Implementation uses standard GPU inference (~50 ms latency per forecast) and operates without internet connectivity.
Deployment to 12 utility-scale solar farms enabled 8-12% improvement in 30-minute grid balancing accuracy. This framework provides a practical, explainable forecasting solution for grid operators.
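In practice the persistence component (GHI(t+30min) ≈ GHI(t)) is usually applied to the clear-sky index rather than raw GHI, so that the deterministic diurnal ramp is preserved and only the cloud attenuation is held constant. A minimal sketch, assuming a toy half-sine clear-sky curve in place of a real clear-sky model (the function names and values here are illustrative, not the paper's code):

```python
import numpy as np

def clear_sky_ghi(hour):
    """Toy clear-sky GHI curve (W/m^2): a half-sine over 06:00-18:00.
    Stand-in for a real clear-sky model; values are illustrative."""
    return float(np.maximum(0.0, 1000.0 * np.sin(np.pi * (hour - 6.0) / 12.0)))

def smart_persistence(ghi_now, hour_now, horizon_h):
    """Clear-sky-index persistence: hold k = GHI / GHI_clear constant,
    so the forecast still follows the deterministic solar geometry."""
    cs_now = clear_sky_ghi(hour_now)
    k = ghi_now / cs_now if cs_now > 0.0 else 0.0
    return k * clear_sky_ghi(hour_now + horizon_h)

# At 10:00 with 30% cloud attenuation (k = 0.7), the 30-minute forecast
# scales the 10:30 clear-sky value by the same index.
fcst = smart_persistence(ghi_now=0.7 * clear_sky_ghi(10.0), hour_now=10.0, horizon_h=0.5)
```

This is the baseline the hybrid ensemble must beat; the learned components correct it where clouds are moving.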
Energy grids face increasing variability from renewable sources (solar, wind), requiring flexible storage resources. Battery energy storage systems (BESS) optimize charging/discharging schedules to provide grid services such as peak shaving, load leveling, and frequency regulation. Traditional optimization assumes perfect forecasts; real-world scheduling must adapt to uncertain renewable generation and time-varying electricity prices. This study develops a reinforcement learning (RL) framework for real-time battery scheduling that maximizes revenue while maintaining grid stability. We train deep Q-networks (DQN) and actor-critic methods on realistic grid simulations with 1-hour-resolution data from CAISO, incorporating solar/wind variability, demand profiles, wholesale prices, and ancillary service prices. The agent's state representation comprises: (1) current battery state of charge (SOC); (2) 4-hour-ahead price forecasts; (3) renewable generation forecast uncertainty; and (4) frequency deviation from the nominal 60 Hz. The action space is charge/discharge power in 50 kW increments (-200 to +200 kW for a 1 MWh battery), subject to efficiency losses (90%), degradation costs, and ramp-rate constraints. Simulations over 2 years (730 days) compare against: (1) rule-based heuristics (charge off-peak, discharge on-peak); (2) day-ahead optimization assuming perfect forecasts; and (3) myopic greedy scheduling. RL achieves 15-25% higher revenue than rule-based baselines and 5-10% more than day-ahead optimization despite imperfect forecasts. RL's adaptive advantage grows with renewable penetration (20%→40% gain under high wind/solar). Under frequency disturbances (sudden generator outages), RL provides faster frequency response (100 ms) than rule-based control (5 s), preventing blackout cascades. Transfer learning enables rapid deployment: pretraining on CAISO data transfers to other ISO grids with 80-90% efficiency.
Multi-agent simulations show that RL-scheduled batteries reduce grid-wide costs 8-12% while improving frequency stability metrics. Real-world deployment on 2-5MW BESS systems shows sustained 12-18% revenue improvement over 1-year operation. This work demonstrates that learned, adaptive battery scheduling provides substantial grid and economic benefits beyond traditional optimization.
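The state/action setup above can be made concrete with a toy environment step. This is a hedged sketch under stated assumptions: positive power means discharge, a single charge-side efficiency stands in for the 90% figure, and the class and field names are illustrative rather than the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class BatteryEnv:
    capacity_kwh: float = 1000.0   # 1 MWh battery
    soc_kwh: float = 500.0         # current state of charge
    efficiency: float = 0.9        # applied on charge (illustrative split of losses)
    dt_h: float = 1.0              # 1-hour resolution, matching the CAISO data

    # Discrete action space: -200 to +200 kW in 50 kW steps (+ = discharge)
    ACTIONS = tuple(range(-200, 201, 50))

    def step(self, power_kw: float, price_per_kwh: float) -> float:
        """Apply one action; return revenue (negative = cost of charging)."""
        energy = power_kw * self.dt_h
        if energy >= 0.0:  # discharging: sell energy, draw down SOC
            energy = min(energy, self.soc_kwh)  # cannot discharge below empty
            self.soc_kwh -= energy
            return energy * price_per_kwh
        # charging: buy energy from the grid; losses reduce the stored amount
        stored = min(-energy * self.efficiency, self.capacity_kwh - self.soc_kwh)
        self.soc_kwh += stored
        grid_energy = stored / self.efficiency  # energy actually drawn from grid
        return -grid_energy * price_per_kwh

env = BatteryEnv()
revenue = env.step(power_kw=200.0, price_per_kwh=0.10)  # sell 200 kWh at $0.10/kWh
```

An RL agent would pick `power_kw` from `ACTIONS` each hour to maximize cumulative revenue subject to the SOC constraints enforced in `step`.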
We present an automated 24-hour Holter ECG interpretation system for rheumatological cardiotoxicity surveillance. The system integrates Pan-Tompkins R-peak detection; beat classification (normal/PAC/PVC/AF); HRV analysis (SDNN, RMSSD, LF/HF, pNN50); dual QTc monitoring (Bazett and Fridericia corrections); Bayesian change-point detection for paroxysmal arrhythmia onset; and HMM-based rhythm-state tracking. It provides drug-specific monitoring for hydroxychloroquine (HCQ), azithromycin combinations, and JAK inhibitors, with an FHE-compatible architecture for privacy-preserving analysis.
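The dual QTc monitoring rests on two standard correction formulas: Bazett divides the QT interval by the square root of the RR interval, Fridericia by its cube root. A minimal sketch (the function names are illustrative, not the system's API):

```python
def qtc_bazett(qt_ms: float, rr_s: float) -> float:
    """Bazett correction: QTc = QT / RR^(1/2), QT in ms, RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms: float, rr_s: float) -> float:
    """Fridericia correction: QTc = QT / RR^(1/3); less rate-dependent at fast heart rates."""
    return qt_ms / rr_s ** (1.0 / 3.0)

# At 60 bpm (RR = 1 s) both corrections leave QT unchanged; at 75 bpm
# (RR = 0.8 s) Bazett corrects more aggressively than Fridericia.
qtc_b = qtc_bazett(400.0, 0.8)      # ~447 ms
qtc_f = qtc_fridericia(400.0, 0.8)  # ~431 ms
```

Reporting both is common in drug-safety monitoring because Bazett over-corrects at high heart rates, which matters when flagging QT-prolonging drugs such as HCQ and azithromycin.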
Interstitial lung disease (ILD) is the leading cause of mortality in systemic sclerosis, dermatomyositis, and RA-ILD. HRCT pattern recognition, distinguishing usual interstitial pneumonia (UIP) from nonspecific interstitial pneumonia (NSIP), determines treatment: antifibrotics versus immunosuppression. We present a Claw4S skill for automated HRCT pattern classification using lung segmentation (thresholding plus morphological operations), texture analysis (GLCM, LBP), spatial distribution mapping, and quantitative fibrosis scoring. The tool classifies UIP versus NSIP patterns, computes the percentage of affected lung volume, tracks progression across serial CTs, and screens for drug-induced ILD (methotrexate, leflunomide, anti-TNF agents). It is fully executable with synthetic DICOM-like data. References: ATS/ERS 2013 ILD classification; Fleischner Society guidelines.
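Of the texture features listed, GLCM contrast is the simplest to sketch: count co-occurring gray-level pairs at a fixed offset, normalize, and weight by the squared level difference. A minimal NumPy illustration on a toy quantized patch (this is not the skill's actual pipeline; the offset and level count are assumptions):

```python
import numpy as np

def glcm(img: np.ndarray, levels: int) -> np.ndarray:
    """Gray-level co-occurrence matrix for the horizontal offset (0, 1), normalized to sum to 1."""
    m = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        m[i, j] += 1.0
    return m / m.sum()

def glcm_contrast(p: np.ndarray) -> float:
    """Contrast = sum_ij P(i, j) * (i - j)^2: a measure of local intensity variation."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

patch = np.array([[0, 0, 1],
                  [0, 1, 1],
                  [2, 2, 2]])
contrast = glcm_contrast(glcm(patch, levels=3))  # 2 of 6 pairs differ by 1 -> 1/3
```

In the full pipeline such features would be computed per lung region after segmentation, so that basal-predominant texture change (suggestive of UIP) can be separated from diffuse change.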