THE BRIEFING
A world model for the heart, BAIO sure likes the ultrasound of that! Bo Wang's lab trained EchoJEPA on 18 million echocardiograms - the largest cardiac ultrasound dataset ever - and it generalizes to pediatric patients it never saw.
Takeda keeps betting big on AI drug design with a $1.7B Iambic deal and the DOE is building self-driving biology labs to generate the training data that bio-AI has been missing.
Also: UC Irvine mapped which genes drive which others in Alzheimer's brains - not just correlations, but direction.
Let's dive in.
NEWS
World model EchoJEPA learns heart anatomy from 18M scans

Cardiovascular disease is the leading cause of death worldwide. Bo Wang's lab at the University of Toronto asked a simple question: “what if AI learned the heart, not the ultrasound machine?” The result is EchoJEPA - what Wang describes as a world model for echocardiography.
Trained on 18 million echocardiograms from 300,000 patients (the largest pretraining dataset for this modality to date), the model uses a latent predictive approach inspired by Yann LeCun's JEPA architecture: instead of reconstructing every pixel, it learns anatomical representations while ignoring the noise and artifacts that make ultrasound data messy.
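To see why predicting in latent space helps with noisy ultrasound, here's a minimal numpy sketch. The pooling "encoder" and every size and weight below are invented for illustration - this is not EchoJEPA's actual architecture, just the shape of the idea: a latent objective compares representations, which have already averaged away much of the per-pixel noise that a reconstruction objective would have to explain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "echo frame": a 16-dim vector standing in for an image patch.
D_IN, D_LAT = 16, 4

# Toy encoder: pools each group of 4 "pixels" into one latent dimension.
W = np.zeros((D_IN, D_LAT))
for j in range(D_LAT):
    W[4 * j : 4 * (j + 1), j] = 0.25

def encode(x):
    return np.tanh(x @ W)

x_clean = rng.normal(size=D_IN)            # underlying anatomy signal
noise = rng.normal(0, 0.5, size=D_IN)      # speckle-like ultrasound noise
x_noisy = x_clean + noise                  # what the probe actually records

# Pixel-space objective: must account for every noisy pixel.
pixel_loss = np.mean((x_noisy - x_clean) ** 2)

# JEPA-style latent objective: compare representations instead,
# where pooling has already averaged much of the noise away.
latent_loss = np.mean((encode(x_noisy) - encode(x_clean)) ** 2)
```

With this averaging encoder, `latent_loss` is provably no larger than `pixel_loss` (pooling discards within-group noise and `tanh` only contracts differences), which is the intuition behind "learn the heart, not the ultrasound machine."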
Bo Wang commented on X: “Yann LeCun's vision: machines should learn like humans - by building internal world models, not reconstructing every pixel. We just validated this idea at the largest scale ever attempted in cardiac ultrasound.”
The results are strong across the board: ~20% better at estimating how well the heart pumps, and 17% better at estimating pressure in the right ventricle. It needs almost no labeled data - 79% accuracy with just 1% of labels, versus 42% for the best baseline using all of them. It's also unusually robust to noisy input. But the standout result? When tested on pediatric patients it had never seen during training, it beat models that had been specifically fine-tuned on pediatric data.
Bryan Johnson chimed in on X that it would have been useful during his years of using ultrasound to track biological signals. Wang responded that EchoJEPA could be used to identify biomarkers for heart function and define more accurate health clocks.
Why it matters: Medical imaging has been waiting for foundation models that generalize beyond their training data. EchoJEPA suggests that learning anatomical structure rather than reconstructing raw pixels is the right paradigm for clinical video - and the pediatric zero-shot result is the strongest evidence for that claim.
Did you know? EchoJEPA's code and pretrained model weights are open source.
NEWS
Takeda bets $1.7B on Iambic's AI drug design

Iambic Therapeutics signed a multi-year collaboration with Takeda worth over $1.7 billion in potential milestones - one of the largest AI drug discovery deals to date. Iambic will apply its NeuralPLexer (predicts 3D protein-ligand structures) and Enchant (multimodal transformer for clinical endpoint prediction) to small molecule programs in oncology and GI/inflammation.
Iambic claims its approach can compress the traditional 6-year path to clinical trials to under 2 years. The San Diego-based company, founded in 2020, has raised over $300 million from investors including Nvidia and Ark.
Why it matters: Takeda is now partnering with both Iambic ($1.7B) and Nabla Bio ($1B+) on AI-driven discovery. The Japanese pharma giant is making a serious bet that AI platforms can deliver clinical candidates faster than traditional discovery.
Did you know? Iambic is hiring across computational science, machine learning, and drug discovery roles in San Diego.
NEWS
DOE launches OPAL to build autonomous AI biofoundries

A still from Argonne National Laboratory, which is part of OPAL.
The Department of Energy launched OPAL, a project to build general-purpose biology foundation models that can autonomously plan, execute, and learn from experiments.
The core problem: Bio-AI has lagged because biological datasets are limited and inconsistent - unlike text for LLMs.
OPAL's answer is to generate that data at scale. The project links fully automated robotic facilities with AI agents that ingest multi-modal data (omics, imaging, physiology, genomics). The goal is shrinking experiment cycles from weeks to days.
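The loop OPAL describes - design, run, learn, repeat - can be caricatured in a few lines. Everything below is a toy: the "assay" is a made-up function with its optimum at 37°C, and the two-round screen-and-refine strategy is a stand-in for the far more sophisticated planning a real autonomous lab would use. It's only meant to show what closing the loop buys you: each round's results design the next round, with no human in between.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a robotic assay: yield peaks at 37 degrees C.
# (Purely illustrative - not an OPAL interface or a real instrument API.)
def run_assay(temp):
    return -((temp - 37.0) ** 2) + rng.normal(0, 0.1)

# Round 1: coarse screen across the feasible range, run by the robots.
coarse = np.arange(20.0, 61.0, 10.0)
coarse_yields = [run_assay(t) for t in coarse]
best_coarse = coarse[int(np.argmax(coarse_yields))]

# Round 2: the "agent" designs a follow-up sweep around the best result,
# closing the design-run-learn loop without waiting on a human.
fine = np.arange(best_coarse - 5.0, best_coarse + 5.5, 1.0)
fine_yields = [run_assay(t) for t in fine]
best_temp = fine[int(np.argmax(fine_yields))]
```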
Why it matters: If self-driving labs can generate the training data that bio-AI needs, they could break the data bottleneck that's slowed the field.
Did you know? The project is part of DOE's Genesis Mission, which has a 240-day deadline (July 2026) to demonstrate working capabilities.
NEWS
SIGNET maps causal gene networks in Alzheimer's brains

Co-corresponding authors Min Zhang (left) and Dabao Zhang.
UC Irvine researchers built the first cell-type-specific causal gene regulatory maps for Alzheimer's disease. Their machine learning framework, SIGNET, moves beyond correlation to reveal cause-and-effect relationships - including feedback loops - by integrating single-cell RNA sequencing with whole-genome data from 272 participants.
The finding: excitatory neurons undergo the most dramatic rewiring as the disease progresses. Nearly 6,000 causal interactions were identified in these cells alone. The team also pinpointed hundreds of “hub genes” that control many others - potential targets for intervention. APP, the gene encoding amyloid precursor protein, was found to regulate genes specifically in inhibitory neurons.
Why it matters: Correlation tells you genes move together; causation tells you which ones to target. That makes SIGNET's hub genes natural candidates for intervention.
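SIGNET's actual method is far more involved, but the core trick of using genetic variation to orient an edge can be shown in a toy simulation. Everything here is invented for illustration - a variant V that perturbs gene A, and effect sizes pulled from thin air. Correlation between A and B is symmetric, so it can't tell A→B from B→A. But if A→B, the variant's signal in B flows entirely through A and vanishes once A is regressed out, while the variant still marks A even after regressing out B:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Toy causal model (not SIGNET itself): variant V perturbs gene A,
# and A causally drives gene B.
V = rng.binomial(1, 0.5, n).astype(float)
A = 1.5 * V + rng.normal(0, 1, n)
B = 0.8 * A + rng.normal(0, 1, n)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# A and B are strongly correlated - but correlation has no direction.
r_ab = corr(A, B)

# Residualize each gene on the other via a simple linear fit.
resid_B = B - np.polyfit(A, B, 1)[0] * A   # B with A's influence removed
resid_A = A - np.polyfit(B, A, 1)[0] * B   # A with B's influence removed

# If A -> B, the variant's trace in B disappears once A is removed,
# while A retains its variant signal even after removing B.
r_v_residB = corr(V, resid_B)   # ~ 0
r_v_residA = corr(V, resid_A)   # clearly nonzero
```

This asymmetry is what orients the edge A→B in the toy - the same general logic by which genotype data can turn co-expression into direction.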
Did you know? SIGNET is designed to generalize. The authors say it can be applied to cancer, autoimmune disorders, and mental health conditions.
THE EDGE
Benchling - the electronic lab notebook that's become default R&D infrastructure at many biotechs - just made its AI features generally available after a year in preview. Over 500 companies are already using them.
The platform embeds AI agents for scientific workflows (literature search, data entry, deep research) and integrates models like AlphaFold 2, Chai-1, and Boltz-2 directly where scientists already work.
Worth reading: Benchling's 2026 Biotech AI Report surveyed how the industry is actually using AI. Key numbers: 76% adoption for lit review, 71% for structure prediction, and half of companies report faster time-to-target. The main blocker? Data quality. R&D systems weren't built for AI - and that's where most teams are stuck.
Until next time,
Peter at BAIO
