Layer-wise relevance propagation of InteractionNet explains protein-ligand interactions at the atom level.
ABSTRACT: Development of deep-learning models for intermolecular noncovalent (NC) interactions between proteins and ligands has great potential in chemical and pharmaceutical tasks, including structure-activity relationship analysis and drug design. It remains an open question how to convert the three-dimensional structural information of a protein-ligand complex into a graph representation for graph neural networks (GNNs). It is also difficult to know whether a trained GNN model has learned the NC interactions properly. Herein, we propose a GNN architecture that separately learns two distinct graphs: one for the intramolecular covalent bonds within the protein and the ligand, and the other for the intermolecular NC interactions between the protein and the ligand, each handled by its corresponding covalent or NC convolutional layers. This graph separation offers advantages such as independent evaluation of each convolutional step's contribution to the prediction of dissociation constants and facile analysis of graph-building strategies for the NC interactions. In addition to prediction performance comparable to that of a state-of-the-art model, analysis with the layer-wise relevance propagation explainability method shows that our model successfully identifies the important characteristics of the NC interactions, especially hydrogen bonding, in the chemical interpretation of protein-ligand binding.
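To make the graph-separation idea concrete, below is a minimal sketch, assuming dense adjacency matrices and a simple GCN-style update in PyTorch. It is not the authors' InteractionNet implementation; the class names (GraphConv, DualGraphNet), layer sizes, and sum-pooling readout are illustrative assumptions, and real inputs would come from featurized protein-ligand complex structures.

```python
# Hypothetical sketch of the two-graph idea described in the abstract: atom
# features are updated by a convolution over the covalent-bond adjacency and,
# separately, by a convolution over the noncovalent (NC) interaction adjacency,
# then pooled to regress a binding-affinity value such as a dissociation constant.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Simple dense graph convolution: H' = ReLU(A @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (n_atoms, n_atoms) adjacency, assumed to include self-loops
        return torch.relu(adj @ self.linear(h))


class DualGraphNet(nn.Module):
    """Separate covalent and noncovalent convolution steps over shared atom features."""

    def __init__(self, n_feat: int, hidden: int = 64):
        super().__init__()
        self.cov_conv = GraphConv(n_feat, hidden)   # intramolecular covalent graph
        self.nc_conv = GraphConv(hidden, hidden)    # intermolecular NC graph
        self.readout = nn.Linear(hidden, 1)         # e.g., predicted log Kd

    def forward(self, x, adj_cov, adj_nc):
        h = self.cov_conv(x, adj_cov)       # message passing along covalent bonds
        h = self.nc_conv(h, adj_nc)         # message passing along NC contacts
        return self.readout(h.sum(dim=0))   # sum-pool over atoms, then regress


# Toy usage with random features and placeholder adjacencies (10 atoms, 16 features).
if __name__ == "__main__":
    n, f = 10, 16
    x = torch.randn(n, f)
    adj_cov = torch.eye(n)   # placeholder covalent adjacency (self-loops only)
    adj_nc = torch.eye(n)    # placeholder NC adjacency
    model = DualGraphNet(f)
    print(model(x, adj_cov, adj_nc))  # scalar affinity prediction
```

Because the covalent and NC updates are separate modules in such a design, relevance from an explainability method like layer-wise relevance propagation can in principle be attributed to each convolutional step independently, which is the per-step analysis the abstract describes.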
SUBMITTER: Cho H
PROVIDER: S-EPMC7713352 | biostudies-literature | 2020 Dec
REPOSITORIES: biostudies-literature