Browse Articles

  • Despite the existence of various pretrained language models for nucleotide sequence analysis, achieving good performance on a broad range of downstream tasks using a single model is challenging. Wang and colleagues develop a pretrained language model specifically optimized for RNA sequence analysis and show that it can outperform state-of-the-art methods in a diverse set of downstream tasks.

    • Ning Wang
    • Jiang Bian
    • Haoyi Xiong
    Article | Open Access
  • Large language models can be queried to perform chain-of-thought reasoning on text descriptions of data or computational tools, which can enable flexible and autonomous workflows (a pattern sketched after this entry). Bran et al. develop ChemCrow, a GPT-4-based agent with access to computational chemistry tools and a robotic chemistry platform, which can autonomously solve design and synthesis tasks for chemicals such as drugs and materials.

    • Andres M. Bran
    • Sam Cox
    • Philippe Schwaller
    Article | Open Access
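The entry above describes the now-common tool-using agent pattern: a language model is prompted in a loop, at each step choosing between calling a tool and returning a final answer. A minimal, hypothetical sketch of that loop (the `llm` stub and tool names are invented for illustration; this is not ChemCrow's implementation or API):

```python
# A toy tool-using agent loop. The LLM call is stubbed out so the
# sketch runs standalone; a real agent would query an actual model.

def llm(prompt: str) -> str:
    # Stand-in for a language model call (hypothetical).
    return "FINAL: a real model would produce the answer here"

# Hypothetical tool registry: name -> callable taking a string argument.
TOOLS = {
    "lookup_property": lambda arg: f"looked up property for {arg!r}",
}

def run_agent(task: str, max_steps: int = 5) -> str:
    transcript = f"Task: {task}\nAvailable tools: {', '.join(TOOLS)}\n"
    for _ in range(max_steps):
        reply = llm(transcript)              # model reasons step by step
        if reply.startswith("FINAL:"):       # model signals it is done
            return reply[len("FINAL:"):].strip()
        tool_name, _, arg = reply.partition(" ")  # e.g. "lookup_property water"
        tool = TOOLS.get(tool_name, lambda a: "error: unknown tool")
        transcript += f"\n{reply}\nObservation: {tool(arg)}\n"  # feed result back
    return "no answer within the step budget"

print(run_agent("Estimate the boiling point of water"))
```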
  • Methods for molecular structure prediction have so far focused only on the most probable conformation, but molecular structures are dynamic and can change, for example, when performing their biological functions. Zheng et al. use a graph transformer approach to learn the equilibrium distribution of molecular systems and show that this can be helpful for a number of downstream tasks, including protein structure prediction, ligand docking and molecular design.

    • Shuxin Zheng
    • Jiyan He
    • Tie-Yan Liu
    Article | Open Access
  • The central assumption in machine learning that data are independent and identically distributed does not hold in many reinforcement learning settings, as the experiences of reinforcement learning agents are sequential and intrinsically correlated in time (see the sketch after this entry). Berrueta and colleagues use the mathematical theory of ergodic processes to develop a reinforcement learning framework that can decorrelate agent experiences and is capable of learning in single-shot deployments.

    • Thomas A. Berrueta
    • Allison Pinosky
    • Todd D. Murphey
    Article
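As a toy illustration of the temporal correlation described above (this illustrates the problem, not the authors' ergodic-theory framework), the lag-1 autocorrelation of states along a single trajectory is far from zero, whereas the same states in shuffled order are close to uncorrelated:

```python
import numpy as np

rng = np.random.default_rng(0)

# States along one trajectory of a 1-D random walk: each state depends on
# the previous one, so consecutive experiences are strongly correlated.
states = np.cumsum(rng.normal(size=10_000))

def lag1_autocorr(x):
    # Pearson correlation between x_t and x_{t+1}.
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print(lag1_autocorr(states))                   # ~1.0: successive states nearly identical
print(lag1_autocorr(rng.permutation(states)))  # ~0.0: shuffling destroys the correlation
```

Standard supervised-learning analyses assume the near-zero case; an agent learning online from a single deployment only ever sees the highly correlated one.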
  • Research papers can make a long-lasting impact when the code and software tools supporting the findings are made readily available and can be reused and built on. Our reusability reports explore and highlight examples of good code sharing practices.

    Editorial
  • Tailoring the alignment of large language models (LLMs) to individuals is a new frontier in generative AI, but unbounded personalization can bring potential harms, such as large-scale profiling, privacy infringement and bias reinforcement. Kirk et al. develop a taxonomy of the risks and benefits of personalized LLMs and discuss the need for normative decisions on the acceptable bounds of personalization.

    • Hannah Rose Kirk
    • Bertie Vidgen
    • Scott A. Hale
    Perspective
  • Speech technology offers many applications to enhance employee productivity and efficiency. Yet new dangers arise for marginalized groups, potentially jeopardizing organizational efforts to promote workplace diversity. Our analysis delves into three critical risks of speech technology and offers guidance for mitigating these risks responsibly.

    • Mike Horia Mihail Teodorescu
    • Mingang K. Geiger
    • Lily Morse
    Comment
  • A classic question in cognitive science is whether learning requires innate, domain-specific inductive biases to solve visual tasks. A recent study trained machine-learning systems on the first-person visual experiences of children to show that visual knowledge can be learned in the absence of innate inductive biases about objects or space.

    • Justin N. Wood
    News & Views
  • Current limb-driven methods often result in suboptimal prosthetic motions. Kühn and colleagues develop a framework called synergy complement control (SCC) that advances prosthetics by learning ‘cyborg’ limb-driven control that ensures natural coordination. Validated in diverse trials, SCC offers reliable and intuitive enhancement of limb function.

    • Johannes Kühn
    • Tingli Hu
    • Sami Haddadin
    Article | Open Access
  • Modelling the statistical and geometrical properties of particle trajectories in turbulent flows is key to many scientific and technological applications. Li and colleagues introduce a data-driven diffusion model that can generate high-Reynolds-number Lagrangian turbulence trajectories with statistical properties consistent with those of the training set and even generalize to rare, intense events unseen during training.

    • T. Li
    • L. Biferale
    • M. Buzzicotti
    Article | Open Access
  • Fragment-based molecular design combines chemical motifs into bioactive compounds. While this approach has grown in capability, molecular linker methods have been restricted to linking fragments one by one, which makes the search for effective combinations harder. Igashov and colleagues use a conditional diffusion model to link multiple fragments in a one-shot generative process.

    • Ilia Igashov
    • Hannes Stärk
    • Bruno Correia
    Article | Open Access
  • Identifying compounds in tandem mass spectrometry requires extensive databases of known compounds or computational methods that simulate spectra for samples not found in databases. Simulating tandem mass spectra remains challenging, and long-range connections in particular are difficult for graph neural networks to model. Young and colleagues use a graph transformer model to learn patterns of long-distance relationships between atoms in molecules.

    • Adamo Young
    • Hannes Röst
    • Bo Wang
    Article
  • The 5′ untranslated region is a critical regulatory region of mRNA, influencing gene expression and translation. Chu, Yu and colleagues develop a language model for analysing untranslated regions of mRNA. The model, pretrained on data from diverse species, enhances the prediction of mRNA translation activities and has implications for new vaccine design.

    • Yanyi Chu
    • Dan Yu
    • Mengdi Wang
    Article
  • Using machine learning methods to model interatomic potentials enables molecular dynamics simulations with ab initio-level accuracy at relatively low computational cost, but requires large amounts of labelled training data obtained through expensive ab initio computations. Cui and colleagues propose a geometric learning framework that leverages self-supervised pretraining to enhance existing machine-learning-based interatomic potential models at negligible additional computational cost.

    • Taoyong Cui
    • Chenyu Tang
    • Wanli Ouyang
    Article
  • The area under the receiver operating characteristic curve (AUROC) of the test set is used throughout machine learning (ML) to assess a model’s performance. However, when concordance is not the only ambition, it gives only partial insight into performance, masking distribution shifts of model outputs and model instability (see the sketch after this entry).

    • Michael Roberts
    • Alon Hazan
    • Carola-Bibiane Schönlieb
    Comment
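Because AUROC depends only on how a model's outputs rank positives against negatives, any strictly monotone transformation of the scores leaves it unchanged; a large shift in the output distribution is therefore invisible to the metric. A minimal sketch with made-up labels and scores, using scikit-learn's roc_auc_score:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Made-up binary labels and model scores (positives score higher on average).
y = rng.integers(0, 2, size=1000)
scores = rng.normal(loc=y.astype(float), scale=1.0)

# A strictly monotone transformation, e.g. a calibration drift that squashes
# every output into a narrow band near 1, preserves the ranking of scores ...
shifted = 1.0 / (1.0 + np.exp(-(scores + 5.0)))

# ... so AUROC is identical, even though the output distribution has moved.
print(roc_auc_score(y, scores), roc_auc_score(y, shifted))  # equal values
print(scores.mean(), shifted.mean())  # very different output scales
```

The two AUROC values match exactly, which is precisely the kind of output-distribution shift that test-set AUROC alone cannot reveal.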