Pre-Train and Learn: Preserving Global Information for Graph Neural Networks

Authors
  • Zhu, Dan-Hao 1,2
  • Dai, Xin-Yu 2
  • Chen, Jia-Jun 2
  • 1 Library, Jiangsu Police Institute, Nanjing 210031, China
  • 2 Nanjing University, Nanjing 210093, China
Type
Published Article
Journal
Journal of Computer Science and Technology
Publisher
Springer-Verlag
Publication Date
Nov 30, 2021
Volume
36
Issue
6
Pages
1420–1430
Identifiers
DOI: 10.1007/s11390-020-0142-x
Source
Springer Nature
Disciplines
  • Regular Paper
License
Yellow

Abstract

Graph neural networks (GNNs) have shown great power in learning on graphs. However, it is still a challenge for GNNs to model information far away from the source node. The ability to preserve global information can enhance graph representation and hence improve classification precision. In this paper, we propose a new learning framework named G-GNN (Global information for GNN) to address this challenge. First, the global structure and global attribute features of each node are obtained via unsupervised pre-training; these global features preserve the global information associated with the node. Then, using the pre-trained global features together with the raw attributes of the graph, a set of parallel kernel GNNs learns different aspects of these heterogeneous features. Any general GNN can serve as a kernel and thereby gain the ability to preserve global information, without altering its own algorithm. Extensive experiments show that state-of-the-art models, e.g., GCN, GAT, GraphSAGE and APPNP, achieve improvements with G-GNN on three standard evaluation datasets. In particular, we establish new benchmark precision records on Cora (84.31%) and Pubmed (80.95%) when learning on attributed graphs.
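To make the parallel-kernel idea from the abstract concrete, here is a minimal sketch in PyTorch with PyTorch Geometric. It is not the authors' reference implementation: the class names, the two-layer GCN used as each kernel, and the softmax-weighted fusion of kernel outputs are illustrative assumptions, and the unsupervised pre-training that produces the global structure/attribute features is treated as given tensors rather than implemented.

```python
# Sketch of the G-GNN framework as described in the abstract: several
# parallel "kernel" GNNs, each consuming a different feature view of the
# same graph (raw attributes, pre-trained global structure features,
# pre-trained global attribute features), with their outputs combined
# for node classification. The fusion scheme here is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv  # any general GNN layer could serve as a kernel


class KernelGNN(nn.Module):
    """One kernel: a plain two-layer GCN over a single feature view."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)


class GGNN(nn.Module):
    """Parallel kernels over heterogeneous feature views, fused by a
    learned softmax-weighted sum (an illustrative choice)."""
    def __init__(self, view_dims, hid_dim, num_classes):
        super().__init__()
        # One kernel per feature view; views may have different widths.
        self.kernels = nn.ModuleList(
            KernelGNN(d, hid_dim, num_classes) for d in view_dims
        )
        self.mix = nn.Parameter(torch.zeros(len(view_dims)))  # fusion weights

    def forward(self, views, edge_index):
        # logits: (num_views, num_nodes, num_classes)
        logits = torch.stack(
            [k(x, edge_index) for k, x in zip(self.kernels, views)]
        )
        w = torch.softmax(self.mix, dim=0).view(-1, 1, 1)
        return (w * logits).sum(dim=0)


# Usage (shapes assumed): x_raw, x_struct, x_attr are (num_nodes, d_i)
# tensors, where the latter two come from the unsupervised pre-training
# stage; edge_index is the (2, num_edges) graph connectivity.
#   model = GGNN([x_raw.size(1), x_struct.size(1), x_attr.size(1)],
#                hid_dim=64, num_classes=7)
#   out = model([x_raw, x_struct, x_attr], edge_index)
```

Because each kernel is self-contained, swapping GCNConv for GAT, GraphSAGE, or APPNP layers changes only KernelGNN, which mirrors the paper's claim that any general GNN can be plugged in without modifying its own algorithm.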
