Jan 15, 2024 · Abstract. False data injection attack (FDIA) is a major type of network attack threatening power systems. FDIA degrades data accuracy by modifying the measured values reported by measurement equipment. When the power grid topology has not changed, data-driven detection methods achieve high detection accuracy. However, the power grid topology … Recently, Graph Injection Attack (GIA) has emerged as a practical attack scenario on Graph Neural Networks (GNNs), in which the adversary merely injects a few malicious nodes instead of modifying existing nodes or edges, as in Graph Modification Attack (GMA). Although GIA has achieved promising results, little is known about why it is successful ...
Understanding and Improving Graph Injection Attack by …
Apr 20, 2024 · Hence, we consider a novel form of node injection poisoning attacks on graph data. We model the key steps of a node injection attack (e.g., establishing links between the injected adversarial nodes and other nodes, choosing the label of an injected node) as a Markov Decision Process. Other work applies reinforcement learning to graph injection attacks that inject malicious nodes into the original graph [55, 75]. Some works also focus on vulnerabilities in other tasks, including node embedding [7], graph matching [70], graph label-flipping [68], graph backdoor attacks [60], and graph out-of-distribution problems [59].
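The snippet above frames node injection as a sequence of decisions: create a node, pick its label, then add edges one at a time. A minimal sketch of that sequential process, assuming a plain adjacency-dict graph and random victim selection (the papers use learned RL/MDP policies instead; all names here are illustrative):

```python
import random

def inject_node(adj, labels, budget, target_label, seed=0):
    """Toy sketch of a node injection attack as a sequential decision process.

    adj: dict mapping node -> set of neighbors (undirected graph)
    labels: dict mapping node -> class label
    budget: max number of edges the injected node may create
    target_label: label chosen for the injected node
    Returns the id of the injected node.
    """
    rng = random.Random(seed)
    new_node = max(adj) + 1
    adj[new_node] = set()
    labels[new_node] = target_label      # action: choose the injected node's label
    candidates = [n for n in adj if n != new_node]
    for _ in range(budget):              # one edge decision per step, like an MDP action
        victim = rng.choice(candidates)  # a real attack would score victims via a policy
        adj[new_node].add(victim)        # action: link injected node to an existing node
        adj[victim].add(new_node)
    return new_node
```

In the MDP formulation each loop iteration is a state transition, and the reward would come from the drop in the victim GNN's accuracy; here the policy is just uniform random for brevity.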
Scalable attack on graph data by injecting vicious nodes
Sep 1, 2024 · A more realistic scenario, graph injection attack (GIA), is studied in [14, 13], which injects new vicious nodes instead of modifying the original graph. A greedy … Nov 19, 2024 · Under these scenarios, exploiting a GNN's vulnerabilities and further degrading its performance becomes extremely attractive for adversaries. Previous attackers mainly focus on structural perturbations or node injections to existing graphs, guided by gradients from surrogate models. Apr 5, 2024 · Rethinking the Trigger-injecting Position in Graph Backdoor Attack. Jing Xu, Gorka Abad, Stjepan Picek. Published 5 April 2024. Computer Science. Backdoor attacks have been demonstrated as a security threat to machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the …
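Several snippets above mention perturbations "guided by gradients from surrogate models": the attacker trains a cheap differentiable stand-in for the victim GNN and ranks candidate edge changes by how much they hurt its loss. A minimal greedy sketch of that idea, assuming a one-layer linear surrogate and loss-difference scoring in place of an exact gradient (all function and parameter names are illustrative, not from any of the cited papers):

```python
import numpy as np

def edge_scores(A, X, W, y, candidates):
    """Score candidate edge insertions by how much each one increases a
    surrogate model's training loss (greedy stand-in for gradient guidance).

    A: (n, n) adjacency matrix; X: (n, d) node features;
    W: (d, c) surrogate weights; y: (n,) true labels;
    candidates: list of (i, j) undirected edges to evaluate.
    """
    def loss(adj):
        deg = adj.sum(1, keepdims=True) + 1e-9
        logits = (adj / deg) @ X @ W             # one-layer linear surrogate GCN
        logits -= logits.max(1, keepdims=True)   # numerically stable softmax
        p = np.exp(logits)
        p /= p.sum(1, keepdims=True)
        return -np.log(p[np.arange(len(y)), y] + 1e-9).mean()

    base = loss(A)
    scores = {}
    for i, j in candidates:
        A2 = A.copy()
        A2[i, j] = A2[j, i] = 1                  # tentatively insert the edge
        scores[(i, j)] = loss(A2) - base         # positive = surrogate loss goes up
    return scores
```

A greedy attacker would repeatedly pick the highest-scoring edge until its budget is spent; gradient-based variants replace the loss-difference loop with a single backward pass through the surrogate to rank all edges at once.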