Converting diffusions to flows accelerates sampling and suggests over-conditioning of co-folding models on sequence

Published in Learning Meaningful Representations of Life @ 14th International Conference on Learning Representations, 2026

Deep generative models can predict protein structures from sequence with high accuracy; however, sampling from these models remains computationally burdensome, with current protocols using hundreds of iterations through the trained model to obtain a final predicted structure. To accelerate sampling and improve the interpretability of the prediction trajectories, we convert the stochastic diffusion sampling process into a deterministic flow process. We show that converting pre-trained, diffusion-based structure prediction models to probability-flow ODEs yields equivalent performance on the FoldBench benchmark alongside a 20x sampling speed-up. Furthermore, we demonstrate the effects of this conversion on prediction diversity, and use the intermediate predictions made along the denoising trajectory to show that deep generative structure prediction methods are strongly conditioned on the sequence and MSA embeddings, making predictions with weak sensitivity to the noise initialisation. Finally, we discuss the implications of strong sequence conditioning for generative protein structure prediction and protein design, and point to future experiments that build on our initial results.
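The diffusion-to-flow conversion described above follows the standard probability-flow ODE construction: replacing the stochastic reverse diffusion with a deterministic ODE that shares the same marginals, so each noise draw maps to exactly one sample. The following is a minimal toy sketch of this idea, not the paper's implementation: a 1D variance-preserving diffusion over a Gaussian data distribution (the constants `MU`, `S0`, and `BETA` are illustrative choices), where the score is known in closed form and the probability-flow ODE is integrated backward with Euler steps.

```python
import numpy as np

# Toy setup: data distribution is N(MU, S0^2). Under a VP diffusion the
# marginal at time t is N(a_t * MU, a_t^2 * S0^2 + (1 - a_t^2)), so the
# score is available in closed form. All constants are illustrative.
MU, S0 = 2.0, 0.5
BETA = 8.0                       # constant noise schedule beta(t) = BETA

def alpha(t):                    # signal scale a_t = exp(-0.5 * BETA * t)
    return np.exp(-0.5 * BETA * t)

def score(x, t):                 # grad_x log p_t(x) for the toy marginal
    a = alpha(t)
    var = a**2 * S0**2 + (1.0 - a**2)
    return -(x - a * MU) / var

def sample_pf_ode(x1, n_steps=50):
    """Deterministic probability-flow ODE sampler (Euler, t: 1 -> 0).

    dx/dt = -0.5 * beta(t) * (x + score(x, t))
    """
    dt = 1.0 / n_steps
    x = np.asarray(x1, dtype=float)
    for i in range(n_steps):
        t = 1.0 - i * dt
        x = x + 0.5 * BETA * (x + score(x, t)) * dt  # step backward in t
    return x

# The trajectory is deterministic: the same prior draw always maps to the
# same final sample, unlike ancestral (stochastic) diffusion sampling.
prior = np.random.default_rng(0).standard_normal(10_000)
samples = sample_pf_ode(prior)
```

Because the map from noise to sample is a bijection, re-running the sampler on the same prior draws reproduces the same structures, which is what makes the intermediate denoising trajectory interpretable and sample diversity attributable to the noise initialisation alone.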

Recommended citation: Quast et al. (2026). "Converting diffusions to flows accelerates sampling and suggests over-conditioning of co-folding models on sequence." Learning Meaningful Representations of Life @ 14th International Conference on Learning Representations. 1(1).
Download Paper