THE BRIEFING
This issue has an accidental theme: what happens when AI × bio tools become accessible to everyone?
A data engineer with no biology training used ChatGPT and AlphaFold to design a cancer vaccine for his dog - and a tumor shrank. A new pipeline runs a full drug design loop on a MacBook in under six minutes. Nature Biotechnology published a taxonomy of agentic AI in biomedicine because the field has grown too fast for researchers to keep track of. And a YC startup released 10,000 AI-designed gene regulatory sequences - only for a leading Stanford computational genomicist to call it hype.
Democratization is the word of the day. What it actually looks like, right now, is messy, uneven and very exciting.
Let’s dive in.
AD
Know What Matters in Tech Before It Hits the Mainstream
By the time AI news hits CNBC, CNN, Fox, and even social media, it's already old news. What feels “new” to most people has usually been in motion for weeks — sometimes months — quietly shaping products, markets, and decisions behind the scenes.
Forward Future is a daily briefing for people who want to stay competitive in the fastest evolving technology shift we’ve ever seen. Each day, we surface the AI developments that actually matter, explain why they’re important, and connect them to what comes next.
We track the real inflection points: model releases, infrastructure shifts, policy moves, and early adoption signals that determine how AI shows up in the world — long before it becomes a talking point on TV or a trend on your feed.
It takes about five minutes to read.
The insight lasts all day.
NEWS
A tech entrepreneur used ChatGPT and AlphaFold to design a cancer vaccine for his dog
Paul Conyngham, a Sydney data engineer with no biology training, used AI to design what scientists involved are calling the first personalized cancer vaccine for a dog. His rescue dog Rosie - an eight-year-old staffy-shar pei cross - was diagnosed with mast cell cancer in 2024. Chemotherapy slowed it but couldn't stop it. Conyngham paid $3,000 to have Rosie's tumor DNA sequenced at UNSW's Ramaciotti Centre for Genomics, then used ChatGPT to plan the analytical pipeline and AlphaFold to model the mutated proteins. Pall Thordarson at the UNSW RNA Institute turned Conyngham's analysis into a custom mRNA vaccine. After two injections - administered under ethics approval at the University of Queensland - one tumor shrank roughly 75%. A second tumor did not respond.
To be clear: this is N=1, not peer-reviewed, with no control group. Tumors occasionally shrink on their own. The vaccine's causal role hasn't been established. But the pipeline is what’s fascinating here. A non-biologist used publicly available AI tools to navigate from raw genetic data to a candidate therapy in months, at a cost conventional drug development can't touch.
Why it matters: Expect more stories like this. As AI tools for genomics and protein modeling become accessible to anyone with data skills, the line between patient and researcher is blurring. The timelines are mismatched, too: AI development moves in weeks, while regulatory approval takes months or years. The gap between what's technically possible and what's practically accessible is the story of AI × bio right now.
Did you know? Greg Brockman (OpenAI president) and Demis Hassabis (Google DeepMind CEO) both publicly highlighted the case. Conyngham has posted a Google Form for other dog owners interested in the approach and is working on a second vaccine targeting the tumor that didn't respond.
NEWS
Nature Biotechnology maps the rise of AI agent teams in biomedical research

I asked Nano Banana 2 to return an image of AI agents doing biomedical research. This is the result.
Agentic AI - teams of specialized AI agents that divide research tasks among themselves - now has its field guide. A perspective in Nature Biotechnology from Jason Moore's group at Cedars-Sinai identifies three algorithms driving these systems (large language models, reinforcement learning, and evolutionary algorithms) and seven building-block characteristics: reasoning, verification, reflection, planning, tool use, memory, and communication.
The paper maps where agentic systems have already been deployed across biomedicine - from Virtual Lab designing nanobodies to BioDiscoveryAgent planning gene perturbation experiments to CRISPR-GPT automating gene-editing workflows. But the authors are candid about limitations: on PubMedQA, the best AI model still scores 68% accuracy versus 78% for human respondents. Their recommendation is not full autonomy but what they call “adaptive autonomy” - systems that know when to ask a human.
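That “adaptive autonomy” idea - act alone when confident, escalate when not - can be sketched in a few lines. This is an illustrative toy, not code from the paper; the names, threshold, and confidence scoring are all assumptions.

```python
# Sketch of "adaptive autonomy": an agent proceeds on its own only when
# its confidence clears a threshold, otherwise it asks a human.
# Hypothetical example - not from the Nature Biotechnology paper.

from dataclasses import dataclass

@dataclass
class AgentAnswer:
    text: str
    confidence: float  # agent's self-reported confidence, 0.0 to 1.0

def route(answer: AgentAnswer, threshold: float = 0.8) -> str:
    """Decide whether the agent acts autonomously or escalates."""
    if answer.confidence >= threshold:
        return "autonomous"       # agent proceeds on its own
    return "escalate_to_human"    # agent asks a person to verify

print(route(AgentAnswer("variant classified as pathogenic", 0.92)))  # autonomous
print(route(AgentAnswer("ambiguous PubMedQA answer", 0.55)))  # escalate_to_human
```

The real systems surveyed in the paper use far richer signals than a single scalar, but the gating pattern is the same.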
Why it matters: When a top journal publishes a taxonomy of a field, it usually means the field has grown faster than researchers can track. BAIO has covered five multi-agent biology systems in seven issues. This paper organizes the landscape those systems are building.
Did you know? The paper cites 163 works spanning drug discovery agents, autonomous experiment planning, and clinical reasoning systems - a useful map for anyone tracking the agentic AI × bio space.
NEWS
A YC startup released 10,000 AI-designed DNA sequences. A Stanford genomicist says it's hype.

Origin Bio, a YC-backed startup, released 10,000 AI-designed DNA sequences that act as switches for gene expression - short elements positioned near genes that control how strongly those genes are turned on, and in which cell types. The sequences cover three widely used laboratory cell lines and are explorable at switch.origin.bio. The pitch: if you can design these switches with graded strengths, you move from asking “what happens when we remove a gene?” to “what happens as we dial it from low to high?” - a shift from binary perturbation to continuous dose-response.
The release drew praise from YC CEO Garry Tan, who called AI × bio "barely touched territory."
It also drew pointed criticism from Anshul Kundaje, a Stanford computational genomicist involved in the PyTorch port of DeepMind's AlphaGenome (see below). Kundaje called the release “more hype than a serious endeavor,” noting the three cell lines are trivially distinguishable, that no benchmarks against existing public models were provided, and that fully open-source alternatives already exist. Origin's own blog confirms experimental validation is still ahead.
Why it matters: The clash captures a real tension in AI × bio right now. Capital is flowing fast, but domain experts are asking whether the science matches the pitch. For BAIO readers evaluating startups in this space, the gap between computational predictions and experimental validation remains the key question.
Did you know? Kundaje's lab (Stanford) and Peter Koo's lab (Cold Spring Harbor Laboratory) recently released alphagenome-pytorch, an open PyTorch port of DeepMind's AlphaGenome that researchers can fine-tune on their own datasets. Install with pip install alphagenome-pytorch.
NEWS
The AI system that trained junior scientists through smart glasses just moved into Stanford's hospitals

MedOS - the clinical extension of LabOS, the AI smart glasses system we covered in Issue 5 - gets its public unveiling at NVIDIA GTC on March 18, when Le Cong of Stanford and Mengdi Wang of Princeton present how the system bridges computational reasoning with real-world clinical execution. The presentation comes as MedOS enters early pilot deployments at Stanford, Princeton, and the University of Washington, starting with hospital logistics and laboratory workflows rather than patient-facing care. Le Cong told The Robot Report the team is “just starting to work with clinicians on testing surgical procedures on a mock body” before moving into actual clinical settings.
MedOS combines smart glasses, robotic arms, and multi-agent AI into a clinical co-pilot. A coordinator agent breaks down queries across specialized modules - including EHR, guideline, radiology, and pathology agents - while a vision system trained on over 85,000 minutes of surgical footage interprets procedures in real time. The system is described in a medRxiv preprint that has not been peer-reviewed.
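The coordinator pattern described above - one agent splitting a query across specialist modules and merging their replies - looks roughly like this. The specialist names come from the MedOS description, but the routing logic and function signatures here are illustrative assumptions, not MedOS code.

```python
# Minimal coordinator-agent pattern: a coordinator dispatches a query
# to specialist modules (EHR, guideline, radiology, pathology) and
# collects their answers. Hypothetical sketch, not the MedOS codebase.

from typing import Callable

SPECIALISTS: dict[str, Callable[[str], str]] = {
    "ehr":       lambda q: f"[EHR] patient history relevant to: {q}",
    "guideline": lambda q: f"[Guideline] protocol steps for: {q}",
    "radiology": lambda q: f"[Radiology] imaging findings for: {q}",
    "pathology": lambda q: f"[Pathology] tissue analysis for: {q}",
}

def coordinator(query: str, needed: list[str]) -> list[str]:
    """Send the query to each required specialist and gather replies."""
    return [SPECIALISTS[name](query) for name in needed]

for reply in coordinator("suspected mast cell tumor", ["ehr", "radiology"]):
    print(reply)
```

In production systems each lambda would be a full model with its own tools and memory; the coordinator's job is decomposition and synthesis, not domain expertise.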
Why it matters: LabOS showed AI could watch scientists work and catch errors. MedOS is the bet that the same approach transfers to the hospital - starting, sensibly, with logistics and labs before patient-facing care.
Did you know? The team has also open-sourced LabClaw - the 206 agentic skills for biomedical research we featured in Issue 7. MedOS builds on the same multi-agent architecture.
UPCOMING
LSX World Congress Europe
📅 LSX World Congress Europe runs March 25-26 in Lisbon, back-to-back with BIO-Europe Spring. The Pharmatech Leaders track focuses on AI in drug discovery and clinical workflows. Insilico Medicine's Petrina Kamya and Jue Wang both speak on March 26. The topics: “Pushing the boundaries: how far can AI technologies take us in healthcare?” and “Silicon meets science: AI's make-or-break moment in biotech.”
THE EDGE
LLMsFold is a drug design pipeline that runs entirely on a laptop. It pairs a large language model with Boltz-2 - an open-source model from MIT that predicts how strongly a drug molecule binds to its target protein - to generate and evaluate new drug candidates without expensive hardware. Feed it a protein target, and the system finds where a drug could attach, designs candidate molecules, scores how well they bind, and iterates to improve them. The authors report the full cycle completes in under six minutes on a MacBook M3. Applied to two cancer-relevant targets, it produced novel candidates that passed all pipeline stages. No wet lab validation yet - the candidate molecules are withheld while the authors pursue patent protection - but the idea that a usable drug design loop now runs on consumer hardware is worth watching. Here’s the preprint.
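The generate-score-iterate loop at the heart of pipelines like this can be sketched in toy form. Everything below is a stand-in: the generator and scorer are stubs in place of the LLM and Boltz-2, and the function names are invented for illustration.

```python
# Toy version of a drug design loop: generate candidates, score binding,
# keep the best, repeat. The generator and scorer here are stubs standing
# in for an LLM and a binding-affinity model like Boltz-2 - not LLMsFold code.

import random
random.seed(0)  # deterministic for the example

def generate_candidates(seed: str, n: int = 8) -> list[str]:
    """Stand-in for LLM generation: mutate a SMILES-like seed string."""
    return [seed + random.choice("CNO") for _ in range(n)]

def score_binding(molecule: str) -> float:
    """Stand-in for affinity prediction (higher = predicted tighter binding)."""
    return sum(molecule.count(atom) for atom in "NO") / len(molecule)

def design_loop(seed: str, rounds: int = 5) -> str:
    """Iteratively refine, always keeping the best-scoring candidate."""
    best = seed
    for _ in range(rounds):
        candidates = generate_candidates(best)
        best = max(candidates + [best], key=score_binding)
    return best

print(design_loop("CCO"))  # best candidate after 5 refinement rounds
```

The real pipeline adds binding-site detection and chemistry-aware generation, but the control flow - propose, score, select, repeat - is the part that now fits on a laptop.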
Until next time,
Peter at BAIO