Graphs are ubiquitous in real-world applications. Graph neural networks (GNNs) have been used to model a myriad of network-based systems across domains such as social networks, citation networks, and knowledge graphs. Despite these breakthroughs, conventional GNNs fail to make accurate predictions when labels are scarce. One such problem is few-shot node classification, which consists of two disjoint phases: in the first (training) phase, classes with substantial labeled nodes (i.e., base classes) are available to learn a GNN model; in the second (testing) phase, the GNN classifies nodes of unseen (novel) classes given only a few labeled nodes. This shortage of labeled training data for the novel classes poses a great challenge to learning effective GNNs.
A prevailing paradigm for tackling this problem is episodic meta-learning. However, meta-learning methods have two disadvantages when learning a graph encoder. First, because each episode selects only a random sample of classes, the topological knowledge learned from those nodes is piecemeal and insufficient to train the GNN, especially when the selected nodes share little correlation. Second, to boost accuracy, meta-learning methods rely on a large number of sampled episodes and can therefore take substantial time to converge. What is needed is an alternative learning approach that addresses few-shot node classification without these disadvantages.
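For context, episodic meta-learning trains on many small "N-way K-shot" tasks: each episode randomly draws N classes, with K labeled (support) nodes and a handful of query nodes per class. Below is a minimal illustrative sketch of sampling one such episode; the function name and toy data are assumptions for illustration, not the authors' implementation.

```python
import random

def sample_episode(labels_by_class, n_way=5, k_shot=3, q_query=5, seed=0):
    """Sample one N-way K-shot episode: n_way classes, with k_shot labeled
    (support) nodes and q_query unlabeled (query) nodes drawn per class."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(labels_by_class), n_way)
    support, query = {}, {}
    for c in classes:
        nodes = rng.sample(labels_by_class[c], k_shot + q_query)
        support[c] = nodes[:k_shot]   # few labeled nodes used as reference
        query[c] = nodes[k_shot:]     # nodes the model must classify
    return support, query

# Toy mapping from class label to node IDs (hypothetical graph data).
labels_by_class = {c: list(range(c * 100, c * 100 + 20)) for c in range(10)}
support, query = sample_episode(labels_by_class, n_way=3, k_shot=2, q_query=4)
```

Because each episode sees only the subgraph induced by its randomly drawn classes, the encoder's view of the graph topology is fragmentary, which is the first disadvantage noted above.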
Researchers at Arizona State University have developed a supervised graph contrastive learning method for training graph neural networks (GNNs). The framework combines supervised graph contrastive learning with novel mechanisms for data augmentation, subgraph encoding, and multi-scale contrast on graphs. Extensive experiments on three benchmark datasets (CoraFull, Reddit, Ogbn) show that the framework significantly outperforms state-of-the-art meta-learning-based methods.
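To give a flavor of the supervised contrastive objective at the heart of such a framework, here is a minimal sketch of a SupCon-style loss that pulls embeddings of same-class nodes together and pushes different-class nodes apart. This is a generic illustration under assumed choices (L2 normalization, temperature `tau`), not the patented framework's exact loss or its augmentation and multi-scale mechanisms.

```python
import numpy as np

def supervised_contrastive_loss(z, labels, tau=0.5):
    """Supervised contrastive loss over node embeddings z (n x d):
    for each anchor, same-class nodes are positives, all others negatives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / tau                                # temperature-scaled similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    logits = np.where(not_self, sim, -np.inf)          # exclude self-pairs
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss = 0.0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if positives:                                  # average over same-class pairs
            loss += -log_prob[i, positives].mean()
    return loss / n
```

Unlike episodic meta-learning, this objective uses class labels of all sampled nodes jointly, so each gradient step contrasts every same-class pair against every cross-class pair in the batch.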
Related publication: Supervised Graph Contrastive Learning for Few-shot Node Classification
Potential Applications:
- Graph neural network (GNN) training method for the purposes of:
  - Social network analysis
  - Product categorization
  - Financial fraud detection
  - Commercial recommendation (e.g., advertisement recommendation)
  - Diagnosis of rare diseases
Benefits and Advantages:
- Proven more effective and more efficient at predicting node labels with only a few labeled nodes as reference
- Takes an attributed graph with few labeled nodes in the target classes as input and predicts labels for the remaining unlabeled nodes
- Outperforms meta-learning methods in both accuracy and efficiency