Iambic Rides Blackwell GPUs on Lambda Cloud

At the heart of drug discovery sits a paradox both frustrating and familiar to anyone steeped in pharmaceutical research: vast oceans of preclinical data flood in from assays, structural analyses, and early molecular testing, yet only a scarce trickle of clinical data ultimately decides the fate of drug candidates.
Navigating this gap, San Diego-based Iambic Therapeutics has crafted an AI model called Enchant. Unlike countless incremental models, this sophisticated multimodal transformer has been engineered to use abundant preclinical insights to make genuinely meaningful predictions about elusive clinical outcomes.
The model’s recent performance shows just how far this shift goes: Enchant achieved a Spearman correlation (Spearman R) of approximately 0.74 in predicting human pharmacokinetic half-life, a significant leap over the previous benchmark of about 0.58. It does so by transferring dense, granular insights from early-stage lab data into predictions about clinical results, precisely where data is scarcest and most valuable.
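For readers who want the mechanics behind that headline number: Spearman R measures how well the model’s rank-ordering of compounds agrees with the observed ordering, rather than raw prediction error. Here is a minimal sketch of how the metric is computed, using invented numbers rather than Iambic’s data:

```python
# Illustrative only: computing a Spearman rank correlation between
# predicted and observed half-lives. The numbers below are invented.
from scipy.stats import spearmanr

predicted_half_life = [2.1, 5.4, 8.0, 3.3, 12.7, 6.2]  # model output (hours)
observed_half_life = [1.8, 6.1, 7.5, 2.9, 14.0, 5.0]   # clinical measurements (hours)

rho, p_value = spearmanr(predicted_half_life, observed_half_life)
print(f"Spearman R: {rho:.2f} (p = {p_value:.3f})")
```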
What makes Enchant uniquely effective is its design as a multimodal transformer. It integrates diverse data inputs (molecular graphs, meticulously detailed assay results, physicochemical descriptors) into a single, coherent process. This contrasts with traditional methods, which often segment different data types into siloed analyses, thus losing potential insights hidden in their integration.
Enchant capitalizes on the fact that each additional piece of preclinical data incrementally enhances its predictive capacity, without demanding more of the scarce clinical data needed for validation.
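To make the fusion idea concrete, here is a minimal, hypothetical PyTorch sketch. It is not Iambic’s architecture; the encoders, feature dimensions, and pooling choices are all placeholder assumptions, but it illustrates how separate input types can become tokens in one shared transformer:

```python
# Hypothetical sketch of multimodal fusion -- not Iambic's actual
# architecture. Each modality is projected into a shared embedding
# space, and the embeddings become tokens in one transformer.
import torch
import torch.nn as nn

class MultimodalEncoder(nn.Module):
    def __init__(self, d_model=256):
        super().__init__()
        # Per-modality projections; input sizes are placeholders.
        self.graph_proj = nn.Linear(128, d_model)    # pooled molecular-graph embedding
        self.assay_proj = nn.Linear(64, d_model)     # structured assay readouts
        self.physchem_proj = nn.Linear(16, d_model)  # physicochemical descriptors
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, 1)            # e.g. predicted half-life

    def forward(self, graph_emb, assay_vec, physchem_vec):
        # One token per modality: shape (batch, 3, d_model).
        tokens = torch.stack([
            self.graph_proj(graph_emb),
            self.assay_proj(assay_vec),
            self.physchem_proj(physchem_vec),
        ], dim=1)
        fused = self.transformer(tokens)
        # Mean-pool the fused tokens and regress the clinical endpoint.
        return self.head(fused.mean(dim=1))

model = MultimodalEncoder()
out = model(torch.randn(2, 128), torch.randn(2, 64), torch.randn(2, 16))
print(out.shape)  # torch.Size([2, 1])
```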
But as anyone working in this space knows, developing a transformer model capable of sophisticated multimodal encoding is only half the challenge.
The other half is scaling compute to match the ambition of the architecture.
It’s here that Iambic’s partnership with Lambda and its deployment of NVIDIA’s HGX B200 clusters become important.
These GPU nodes are purpose-built for the intense, interlinked demands of large transformer models. Using NVIDIA’s Blackwell GPUs interconnected via NVLink and NVSwitch fabrics, the HGX B200 clusters are calibrated for extreme parallelism and minimal latency, a non-negotiable requirement for the rigors of multimodal transformers.
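To ground what that parallelism looks like in practice, here is a generic sketch of single-node data-parallel training in PyTorch. It is not Iambic’s training code, and the model is a trivial stand-in, but the NCCL backend it initializes is what carries gradient traffic over NVLink/NVSwitch on HGX-class nodes:

```python
# Minimal sketch of multi-GPU data-parallel training on a single
# HGX-class node. Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")  # NCCL routes collectives over NVLink/NVSwitch
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(256, 1).cuda(local_rank)  # stand-in for the real model
model = DDP(model, device_ids=[local_rank])

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(32, 256).cuda(local_rank)
y = torch.randn(32, 1).cuda(local_rank)

loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()   # gradient all-reduce happens here, across all 8 GPUs
optimizer.step()
dist.destroy_process_group()
```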
Lambda’s cloud infrastructure offers more than raw horsepower. It introduces a shift in how compute resources are managed and scaled. The flexibility of one-click cluster deployment on Lambda means Iambic researchers can match resources directly to the iterative demands of model experimentation.
Instead of the traditional infrastructure headaches (server racks, power provisioning, and cooling), compute capability becomes elastic and adaptive.
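As one concrete illustration of that elasticity, the sketch below launches an instance programmatically through Lambda’s public cloud REST API. The endpoint and field names follow Lambda’s published API, but the region, instance-type string, and SSH key name are placeholders; consult the current documentation before relying on them:

```python
# Illustrative: launching a GPU instance via Lambda's cloud REST API.
# Endpoint and fields follow Lambda's published API; the region,
# instance-type name, and key name below are placeholders.
import os
import requests

API_KEY = os.environ["LAMBDA_API_KEY"]  # assumed to be set in the environment

resp = requests.post(
    "https://cloud.lambdalabs.com/api/v1/instance-operations/launch",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "region_name": "us-east-1",           # placeholder region
        "instance_type_name": "gpu_8x_b200",  # placeholder type name
        "ssh_key_names": ["my-ssh-key"],      # a pre-registered key name
        "quantity": 1,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # contains the new instance IDs on success
```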
That elasticity matters when you consider the shape of the Enchant workload.
The process starts with data ingestion, effortlessly weaving structured assay results, molecular graph representations, and critical physicochemical properties into the model. The transformer then encodes these multimodal data streams, first through pretraining on preclinical datasets, followed by fine-tuning using targeted clinical datasets, which are generally smaller.
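A hypothetical sketch of that two-stage schedule, reusing the MultimodalEncoder sketch from earlier (the data loaders here are synthetic stand-ins, not Iambic’s pipeline):

```python
# Hypothetical two-stage schedule: pretrain on abundant preclinical
# data, then fine-tune on a much smaller clinical set.
import torch
from torch.utils.data import DataLoader, TensorDataset

def synthetic_loader(n):
    # (graph embedding, assay vector, physchem vector, target) tuples.
    return DataLoader(TensorDataset(
        torch.randn(n, 128), torch.randn(n, 64),
        torch.randn(n, 16), torch.randn(n, 1)), batch_size=32)

def run_epochs(model, loader, optimizer, epochs):
    for _ in range(epochs):
        for graph_emb, assay_vec, physchem_vec, target in loader:
            optimizer.zero_grad()
            loss = torch.nn.functional.mse_loss(
                model(graph_emb, assay_vec, physchem_vec), target)
            loss.backward()
            optimizer.step()

model = MultimodalEncoder()  # the sketch class defined earlier

# Stage 1: pretraining on the large preclinical corpus.
run_epochs(model, synthetic_loader(1024),
           torch.optim.AdamW(model.parameters(), lr=1e-4), epochs=10)

# Stage 2: fine-tuning on scarce clinical endpoints; a lower learning
# rate helps preserve the pretrained representations.
run_epochs(model, synthetic_loader(64),
           torch.optim.AdamW(model.parameters(), lr=1e-5), epochs=3)
```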
This approach yields predictions refined enough to guide multi-parameter optimization, a notoriously difficult task that requires simultaneously balancing efficacy, metabolism, pharmacokinetics, and safety. Enchant’s predictions then feed back into the molecular design loop, iteratively improving drug candidate selection.
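To make multi-parameter optimization concrete, here is a toy scoring scheme: each predicted property is mapped to a desirability score in [0, 1], and a weighted sum ranks candidates. The property names, target windows, and weights are invented for illustration, not Iambic’s criteria:

```python
# Toy multi-parameter optimization: rank candidates by a weighted sum
# of per-property desirability scores. All values here are invented.

def desirability(value, low, high):
    # 1.0 inside the target window; linear fall-off over one
    # window-width outside of it.
    if low <= value <= high:
        return 1.0
    edge = low if value < low else high
    return max(0.0, 1.0 - abs(value - edge) / (high - low))

def mpo_score(candidate):
    # (predicted value, target window, weight) per property.
    terms = [
        (candidate["potency_pIC50"],   (7.0, 10.0),   0.4),
        (candidate["half_life_hours"], (4.0, 12.0),   0.3),
        (candidate["clearance"],       (0.0, 5.0),    0.2),
        (candidate["herg_margin"],     (30.0, 100.0), 0.1),  # safety margin
    ]
    return sum(w * desirability(v, lo, hi) for v, (lo, hi), w in terms)

candidates = [
    {"potency_pIC50": 8.2, "half_life_hours": 6.5, "clearance": 3.1, "herg_margin": 50.0},
    {"potency_pIC50": 9.0, "half_life_hours": 1.2, "clearance": 8.7, "herg_margin": 12.0},
]
ranked = sorted(candidates, key=mpo_score, reverse=True)
print(ranked[0])  # the better-balanced candidate ranks first
```

A candidate that is merely adequate on every axis can outrank one that excels on potency but fails on clearance or safety, which is exactly the trade-off multi-parameter optimization is meant to surface.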
Those deeply engaged in the technicalities of pharma at scale know that multimodal transformer architectures are becoming almost essential. And the infrastructure they require (advanced GPU clusters interconnected with sophisticated low-latency fabrics) has transitioned from luxury to necessity.
It would be nice if we could all build our own clusters, but it’s good to know there are plenty of options among GPU cloud players like Lambda, especially with high-end GPUs so scarce.