
Dataset Information


Derivative-free optimization adversarial attacks for graph convolutional networks.


ABSTRACT: In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks. An attacker can maliciously modify edges or nodes of the graph to mislead the model's classification of the target nodes, or even degrade the model's overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free optimization (DFO) that generates graph adversarial examples without gradients and allows advanced DFO algorithms to be applied conveniently. Second, we implement a direct attack algorithm (DFDA) on top of the framework using the Nevergrad library. Additionally, we overcome the problem of a large search space by redesigning the perturbation vector with a constrained size. Finally, we conduct a series of experiments on different datasets and parameters. The results show that DFDA outperforms Nettack in most cases, achieving an average attack success rate of more than 95% on the Cora dataset when perturbing at most eight edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node-classification adversarial attacks.
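The core idea of the abstract can be sketched in a few lines: treat the attacked model's loss as a black box, search over a size-constrained edge-perturbation vector, and keep the best candidate found. The sketch below is illustrative only, assuming a hypothetical `loss` callable; plain random search stands in for the advanced DFO optimizers (e.g., from Nevergrad) the paper plugs into its framework.

```python
import random

def dfo_attack(loss, n_edges, budget=8, iterations=200, seed=0):
    """Black-box search for an edge-perturbation vector.

    Random search stands in here for an advanced DFO method.
    `loss` is the black-box objective (lower means the target node is
    closer to misclassification); `budget` caps the number of flipped
    edges, mirroring the paper's constraint of at most eight edges.
    """
    rng = random.Random(seed)
    best_vec = [0] * n_edges          # start from the clean graph
    best_loss = loss(best_vec)
    for _ in range(iterations):
        # Sample a candidate that flips at most `budget` edges, which
        # keeps the search space small (the constrained-size vector).
        vec = [0] * n_edges
        for idx in rng.sample(range(n_edges), rng.randint(1, budget)):
            vec[idx] = 1
        cand_loss = loss(vec)
        if cand_loss < best_loss:     # greedy: keep the best candidate
            best_vec, best_loss = vec, cand_loss
    return best_vec, best_loss
```

A toy objective shows the shape of the interaction: a real attack would query the GCN's output for the target node instead of this stand-in.

```python
# Hypothetical black-box loss: flipping edges 2 and 5 misleads the
# model; each extra flip costs a small penalty.
toy_loss = lambda v: sum(1 for i in (2, 5) if v[i] == 0) + 0.1 * sum(v)
vec, val = dfo_attack(toy_loss, n_edges=10, budget=2, iterations=500)
```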

SUBMITTER: Yang R 

PROVIDER: S-EPMC8409335 | biostudies-literature

REPOSITORIES: biostudies-literature

Similar Datasets

| S-EPMC9919433 | biostudies-literature
| S-EPMC6935161 | biostudies-literature
| S-EPMC10786821 | biostudies-literature
| S-EPMC7792111 | biostudies-literature
| S-EPMC8182908 | biostudies-literature
| S-EPMC10761094 | biostudies-literature
| S-EPMC8268184 | biostudies-literature
| S-EPMC9045664 | biostudies-literature
| S-EPMC10078111 | biostudies-literature
| S-EPMC10277640 | biostudies-literature