THE BRIEFING
A protein that doesn't exist in nature - designed computationally by a Nobel laureate's lab - can now grow inside living cells and record their history as fluorescent rings, readable under a microscope. Yes, you read that right.
Meanwhile, the FDA just approved the first AI imaging device for use during breast cancer surgery, with a regulatory pathway that lets the AI improve itself after approval.
Insilico Medicine and Liquid AI shipped a drug discovery model small enough to run entirely behind pharma's own firewall. And a new platform launched where AI agents hold wallets, fund experiments, and commission wet lab work on their own.
In this issue we also preview NVIDIA's GTC AI × biology track, and highlight some of the sessions worth watching for BAIO readers.
Let's dive in.
AD
Become An AI Expert In Just 5 Minutes
If you’re a decision maker at your company, you need to be on the bleeding edge of, well, everything. But before you go signing up for seminars, conferences, lunch ‘n learns, and all that jazz, just know there’s a far better (and simpler) way: Subscribing to The Deep View.
This daily newsletter condenses everything you need to know about the latest and greatest AI developments into a 5-minute read. Squeeze it into your morning coffee break and before you know it, you’ll be an expert too.
Subscribe right here. It’s totally free, wildly informative, and trusted by 600,000+ readers at Google, Meta, Microsoft, and beyond.
NEWS
A drug discovery AI small enough to keep your data private

Pharma companies face an awkward trade-off: the best AI models sit in someone else's cloud, but sending proprietary molecules to external servers is a regulatory and IP problem. Insilico Medicine and MIT-founded Liquid AI say you shouldn't have to make that trade-off. Their new model, LFM2-2.6B-MMAI, runs entirely on a company's own infrastructure at 2.6 billion parameters - roughly a tenth the size of comparable systems - while handling property prediction, molecular optimization, retrosynthesis, and toxicity screening in a single model.
On public benchmarks in an ICLR 2026 workshop paper, it beat Google's TxGemma-27B on 13 of 22 pharmacokinetics and toxicology tasks and hit 98.8% success on multi-property molecular optimization. Insilico also claims it outperformed GPT-5.1 and Claude Opus 4.5 on an internal affinity benchmark.
Why it matters: If a small, purpose-trained model can genuinely match the big ones on drug discovery tasks, pharma gets AI capability without sending data off-site. That trade-off disappearing is a bigger deal than another leaderboard score.
Did you know? MMAI Gym, the training platform behind the model, is being offered as a membership program for pharma and biotech companies to fine-tune their own models on pharmaceutical tasks. Also, Liquid AI is hiring. So is Insilico.
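Why does 2.6 billion parameters translate into "runs behind your own firewall"? A back-of-envelope memory calculation makes it concrete. This is an illustrative sketch, not vendor-published sizing: it assumes fp16/bf16 weights (2 bytes per parameter) and ignores activation memory and KV cache, so treat the numbers as lower bounds.

```python
# Rough GPU memory needed just to hold model weights, assuming
# fp16/bf16 (2 bytes per parameter). Illustrative only: real
# deployments also need activation memory and KV cache.

def weight_memory_gb(n_params: float, bytes_per_param: int = 2) -> float:
    """Approximate memory footprint of the weights alone, in GB."""
    return n_params * bytes_per_param / 1e9

lfm2 = weight_memory_gb(2.6e9)    # LFM2-2.6B-MMAI
txgemma = weight_memory_gb(27e9)  # TxGemma-27B, the baseline it was benchmarked against

print(f"LFM2-2.6B weights:   ~{lfm2:.1f} GB")    # → ~5.2 GB: fits a single workstation GPU
print(f"TxGemma-27B weights: ~{txgemma:.1f} GB")  # → ~54 GB: multi-GPU or datacenter hardware
```

At roughly 5 GB of weights, the smaller model fits on commodity on-prem hardware, which is what makes the keep-your-data-private pitch plausible in the first place.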
NEWS
A computationally designed protein records cellular history like tree rings

We asked Nano Banana 2 to make an artistic interpretation of the figures in the GEMINI paper.
A computationally designed protein can now record what a cell experiences over time - capturing activity as fluorescent ring patterns, like tree rings, readable under a microscope. The protein, called GEMINI, doesn't exist in nature. David Baker's group at the University of Washington designed it, and Dingchang Lin's lab at Johns Hopkins built it into a recording system that captures events as close as 15 minutes apart with hour-level accuracy. The method has just been published in Nature (the open preprint can be found here).
In a tumor xenograft, GEMINI recorded inflammation at single-cell resolution, revealing that response timing varied with local vascular density. In the mouse brain, it captured seizure-induced neural activation with minimal impact on behavior or memory. The authors note that scaling readout to whole organs will require computer vision and deep learning to decode patterns automatically.
Why it matters: Biology's tools give snapshots - what a cell looks like now. GEMINI records trajectories - what each cell experienced over time. If AI-based decoding catches up, this could become the temporal data layer that virtual cell models currently lack.
Did you know? David Baker shared the 2024 Nobel Prize in Chemistry for computational protein design. GEMINI's plasmids are on Addgene and all analysis code is open-source on GitHub.
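To make the tree-ring analogy concrete, here is a toy decoding sketch - emphatically not the paper's pipeline, and the function names and numbers are invented for illustration. It assumes the protein assembly grows outward at a constant rate, so a bright ring's radial position divided by the growth rate gives the time of the event that produced it.

```python
import numpy as np

def decode_rings(profile: np.ndarray, growth_rate: float, threshold: float = 0.5):
    """Toy readout: recover event times (hours) from a 1D radial intensity profile.

    A 'ring' is a local intensity maximum above `threshold`; its radius,
    divided by the growth rate (pixels/hour), gives the event time.
    """
    times = []
    for r in range(1, len(profile) - 1):
        is_peak = profile[r] >= profile[r - 1] and profile[r] > profile[r + 1]
        if is_peak and profile[r] > threshold:
            times.append(r / growth_rate)
    return times

# Synthetic profile: bright rings at radii 10, 25, 40 on a dim background.
profile = np.full(50, 0.1)
for r in (10, 25, 40):
    profile[r] = 1.0

print(decode_rings(profile, growth_rate=5.0))  # → [2.0, 5.0, 8.0]
```

Real images are noisy 2D (or 3D) data with variable growth rates, which is exactly why the authors say whole-organ readout will need computer vision and deep learning rather than peak-picking like this.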
NEWS
First AI imaging device approved for breast cancer surgery

Perimeter Claire.
The FDA just approved the first AI-powered imaging device for use during breast cancer surgery. Claire, built by Perimeter, combines optical coherence tomography - an imaging technique best known from eye exams, here applied to tissue at roughly 10 times the resolution of surgical X-ray or ultrasound - with an AI engine trained on millions of proprietary tissue images. During surgery, the system scans excised tissue in real time and flags areas where cancer may remain at the margins.
In its pivotal trial, Claire achieved 88.1% margin accuracy and a statistically significant reduction in residual cancer compared to standard care alone.
The regulatory detail worth watching: Claire's approval includes a predetermined change control plan, which lets Perimeter push AI updates without full FDA re-review. Every procedure generates new training data, feeding a continuous improvement loop.
Why it matters: About one in five breast cancer patients currently needs repeat surgery because margins weren't clear. Claire targets roughly 300,000 annual U.S. breast-conserving surgeries. Nationwide rollout begins in the coming weeks.
Did you know? Claire is one of only a handful of AI-enabled Class III medical devices in the U.S. - the highest-risk category, requiring the most rigorous FDA review. Perimeter is a Toronto-based company established in 2013, with U.S. headquarters in Dallas, Texas. The company is hiring.
NEWS
A social network where AI agents fund and run their own science

Bio Protocol's Paul Kohlhaas launched Science Beach, a platform where AI agents and human researchers publish scientific hypotheses, critique each other's work, and fund experiments through on-chain payment rails. Agents built on the OpenClaw framework can hold wallets, pay per query, and collect rewards when their work produces results. The architecture connects to cloud labs, so a hypothesis generated computationally can trigger a physical experiment without a human in the loop.
The technical foundation is ClawdLab, detailed in a February arXiv preprint. The paper traces ClawdLab's origins to a hard lesson: a literature review of the OpenClaw and Moltbook ecosystem documented persistent security vulnerabilities across 131 agent skills and over 15,200 exposed control panels.
ClawdLab redesigns the architecture around hard role restrictions, PI-led governance, and adversarial peer review encoded as protocol constraints rather than social consensus - an attempt to make scientific quality a structural property, not a moderation problem.
Why it matters: Most AI scientist platforms run single-agent pipelines or predetermined workflows. What Science Beach is attempting - fully decentralised agents that self-organise, fund each other, and commission wet lab work - is genuinely different in kind. Whether it produces real science or sophisticated-looking slop is an open question the team is asking publicly.
Did you know? A $2,500 prize contest for agent-generated hypotheses is running until March 13. Criteria: novelty, testability, and grounded citations. Any OpenClaw agent can enter.
UPCOMING
NVIDIA GTC brings the AI × bio stack to San Jose

NVIDIA GTC 2026 runs March 16–19 in San Jose with a dedicated pharma and biotech track. Sessions worth watching for BAIO readers: Bo Wang (Xaira) on why scaling laws in biology are data-limited, not compute-limited; Andrew White (Edison Scientific) on building AI scientists that compress discovery from six months to one day; Genentech's head of computation John Marioni and Harvard's Marinka Zitnik on where AI fits in the scientific method; and a self-driving labs session featuring HighRes Biosolutions, Thermo Fisher, and Multiply Labs. Hands-on workshops cover BioNeMo, single-cell analysis with RAPIDS, and deploying foundation models for drug discovery. Jensen Huang keynotes March 16. Free virtual attendance.
THE EDGE
Drug discovery runs thousands of biological tests. OpenPheno, an AI model described in a preprint from Shanghai Jiao Tong University and Harvard Medical School, asks whether you could replace most of them with one image and a question. Feed it a microscopy image of how a compound changes cell appearance, its chemical structure, and a plain-language description of the biological effect you're interested in - and it predicts whether the compound will be active, without ever having seen that specific test before. On 54 entirely new tests, it outperformed models that had been specifically trained on those tests with full data. You can find code and datasets here.
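For readers wondering how one model can answer an assay it has never seen: a common way to frame this kind of zero-shot setup is a shared embedding space, where image, structure, and assay description are all encoded as vectors and activity is scored by similarity. The sketch below is a conceptual illustration of that general idea, not OpenPheno's actual architecture; the encoders are stand-ins and every name and threshold here is invented.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_active(img_emb, struct_emb, assay_emb, threshold=0.5):
    """Toy zero-shot call: fuse compound evidence (image + structure),
    then compare it to the plain-language assay query."""
    compound = (img_emb + struct_emb) / 2.0
    score = cosine(compound, assay_emb)
    return score, score > threshold

# Dummy embeddings standing in for real encoder outputs. The image and
# structure vectors are built to resemble the queried effect, so the
# compound should score as active against this assay description.
rng = np.random.default_rng(0)
assay = rng.normal(size=64)                  # "does it induce effect X?"
img = assay + 0.1 * rng.normal(size=64)      # phenotype resembles effect X
struct = assay + 0.2 * rng.normal(size=64)   # structure embedding, same direction

score, active = predict_active(img, struct, assay)
print(f"score={score:.2f}, active={active}")
```

The point of the framing: because the assay is just another input vector, a new test needs only a text description, not a retraining run - which is what "without ever having seen that specific test" buys you.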
Until next time,
Peter at BAIO



