
THE BRIEFING

This issue got me thinking about questions more than answers.

Are we getting the full stack? Today we talk about a model of the brain, an agent that delivers lab-ready antibody candidates, a pharma deal on aging, a workspace for cellular biology. AI is starting to touch every level of biology and biotech.

Who builds the biology layer? Meta, Anthropic, a DeepMind veteran, and Lilly are all making major moves in the same week. The question isn't whether AI × bio happens - it's who gets to shape it.

Will the human still matter? Read on for the story of an agent that made a mistake - and the biologist who caught it. The tools are getting powerful. The question of when to trust them is getting harder.

And is AI × bio becoming a product category? Lilly is buying commercialization rights. Anthropic, by the looks of it, is building a full-fledged biology mode.

Let's dive in.

1,000+ Proven ChatGPT Prompts That Help You Work 10X Faster

ChatGPT is insanely powerful.

But most people waste 90% of its potential by using it like Google.

These 1,000+ proven ChatGPT prompts fix that and help you work 10X faster.

Sign up for Superhuman AI and get:

  • 1,000+ ready-to-use prompts to solve problems in minutes instead of hours—tested & used by 1M+ professionals

  • Superhuman AI newsletter (3 min daily) so you keep learning new AI tools & tutorials to stay ahead in your career—the prompts are just the beginning

NEWS
Eli Lilly signs $2.75 billion deal with AI drug developer Insilico Medicine

Alex Zhavoronkov is the CEO of Insilico Medicine. Credit: Insilico

Eli Lilly signed a deal with Insilico Medicine worth up to $2.75 billion - one of the largest single AI drug discovery agreements to date. Insilico receives $115 million upfront, with the rest tied to development, regulatory, and commercial milestones plus royalties. According to Reuters, Lilly gets exclusive rights to develop and commercialize preclinical oral drug candidates from Insilico's AI platform for selected disease areas.

The deal deepens a relationship BAIO has tracked since Issue 3, when Lilly and Insilico published their “Prompt-to-Drug” framework alongside a $100 million-plus research agreement. That was already part of Lilly's broader AI build-out - a $1 billion NVIDIA co-innovation lab and a $1.75 billion deal with Isomorphic Labs.

Insilico has developed at least 28 drugs using generative AI, with nearly half at clinical stage, according to founder and CEO Alex Zhavoronkov. The company's rentosertib - widely cited as the first AI-generated drug to show clinical efficacy in a peer-reviewed human trial - published Phase 2a results in Nature Medicine last year. Insilico's shares are up more than 50% year-to-date.

Why it matters: Insilico isn't just an AI drug company. Founder Alex Zhavoronkov has spoken publicly about choosing drug targets that score across multiple hallmarks of aging - then getting them approved for specific age-related diseases. Rentosertib's indication, idiopathic pulmonary fibrosis, has an average onset around age 65. Lilly, meanwhile, stated at the ARDD 2025 aging conference that longevity is part of its corporate strategy. A $2.75 billion deal between these two may signal that aging-adjacent drug discovery is entering the pharma mainstream.

Did you know? The Financial Times reported that the deal includes a GLP-1 drug for diabetes - potentially putting Insilico-designed molecules into the same therapeutic category as Lilly's blockbuster Mounjaro. Zhavoronkov declined to confirm which specific assets were licensed. Insilico is hiring.

NEWS
Meta built a digital model of the human brain - for “in-silico neuroscience”

Meta's FAIR team released TRIBE v2 - a model trained on over 1,000 hours of brain scans from 720 people watching movies and listening to podcasts. Give it any video, audio, or text, and it predicts how the brain will respond. The team calls it a tool for “in-silico neuroscience” - running brain experiments on a computer instead of in a scanner.

The headline result: when tested on people it had never seen, TRIBE v2 estimated how a typical brain responds to a stimulus more accurately than adding one more person's scan would have. Individual brain scans are noisy - attention drifts, signal quality varies. The model, trained on hundreds of brains, cuts through that noise and estimates the average response more precisely. That makes it useful for research - understanding how healthy brains process information - though not yet for diagnosing what's wrong with a specific patient's brain.

In virtual replications of classic experiments - showing faces, playing sentences, contrasting emotional versus physical pain - TRIBE v2 correctly identified which brain regions responded, recovering findings that took decades of scanner work to establish.

The paper is candid about limits. The model only predicts how the brain reacts to things it sees and hears - not what happens when you make a decision, move your hand, or have a thought. And applying it to neurological disorders is a stated future goal, not a current capability.

Why it matters: Every fMRI session costs hundreds of dollars and requires a dedicated facility. A model that can pre-screen how brains respond to a stimulus - before anyone gets in the scanner - changes the speed and economics of neuroscience research. Performance scales with data and shows no sign of plateauing.

Did you know? TRIBE v2's code, weights, and an interactive demo are all publicly available. It builds on three open AI models for video, audio, and language - two from Meta and one from Google.

NEWS
An AlphaFold veteran's AI agent designs lab-validated antibodies from a text prompt

You can talk to Latent-Y in plain English. Credit: Latent Labs

Simon Kohl, who co-led DeepMind's protein design team and co-developed AlphaFold2, launched Latent-Y - an autonomous agent that takes a therapeutic goal in plain English and delivers lab-ready antibody candidates. “Think of it like Deep Research or OpenClaw, but instead of a report at the end, you get lab-confirmed drug candidates,” Kohl wrote on X.

The agent reads scientific literature, identifies where on a target an antibody could attach, designs candidates, and iterates - “all without being told how.” Across nine targets in a technical report, it produced lab-confirmed binders against six, with binding strength in the single-digit nanomolar range - nanomolar affinities are typical for approved drugs.

In a broader benchmark, the agent was given 21 scientific papers as input and correctly identified the relevant target in every case. Binding was confirmed by surface plasmon resonance (SPR), a standard lab measurement.

In one campaign, the agent was asked to design antibodies that bind the same target in both the human and monkey versions of a cancer-related protein. It predicted the monkey protein's structure from its DNA sequence, then wrote custom code to design against both species at once - a capability it wasn't built with. But it also made a mistake: some designs latched onto the inside of the protein, where a real antibody could never physically reach. A human researcher spotted the problem, the agent corrected itself, and three of the final designs worked in the lab. It's an honest picture of what “autonomous” looks like right now - powerful, but not independent of human judgment.

Latent Labs, which Kohl founded in 2023, has $50 million in total funding.

Why it matters: BAIO has covered AI agents that design experiments (Eubiota, Issue 5) and run them (LabOS, Issue 5). Latent-Y adds another claim: an agent that designs the drug itself. All results are self-reported though - independent validation is the next hurdle.

Did you know? Latent Labs is open-sourcing the sequences and structures of its top lab-validated binders at platform.latentlabs.com. The agent also supports peptide and mini-binder design.

NEWS
Anthropic appears to be building a dedicated biology mode for Claude

A new mode called “Operon” has been spotted in the Claude desktop app by TestingCatalog, which monitors AI apps for unreleased features. Anthropic hasn't confirmed it. But what's visible suggests a standalone workspace designed specifically for computational biology - not a plugin or a connector, but a fourth mode alongside Chat, Code, and Cowork.

The welcome screen pitches four starter tasks that read like a computational biologist's to-do list: mapping evolutionary relationships between species, designing gene-disabling experiments with CRISPR, analyzing gene activity in individual cells, and ranking protein variants using AI models. Users can create projects with persistent sessions, grant the system access to local files - useful for researchers sitting on large institutional datasets - and switch between planning and execution modes.

The name is a nod: in molecular biology, an operon is a set of genes that get switched on together. Anthropic has been moving in this direction for months. Claude for Life Sciences launched in October 2025, connecting to platforms like PubMed and Benchling. Claude for Healthcare followed in January. The company launched a science blog in March and announced research partnerships with the Allen Institute and Howard Hughes Medical Institute. A purpose-built biology workspace fits the trajectory - but remains unannounced.

Why it matters: Google has Co-Scientist. OpenAI just pledged $1 billion for life sciences through its Foundation. If Operon ships, Anthropic would be the first major AI company to offer computational biology not as a feature but as a dedicated product mode.

Did you know? Anthropic's AI for Science program offers free API credits to academic biology researchers.

THE EDGE

Asimov Press published a step-by-step guide to designing antibodies on a computer. It walks the reader through a real campaign against Nipah virus - from choosing a crystal structure in the Protein Data Bank, to trimming it in PyMOL, to running BoltzGen (from the Boltz-2 team) on a web platform called Ariax, to scoring and filtering the results. Until recently, no tool could reliably design antibodies computationally. Now several can - and this guide shows you how.

ON OUR RADAR

Until next time,
Peter at BAIO

Keep Reading