Case ID: M19-078P

Published: 2022-12-27 10:58:50

Last Updated: 2023-04-05 15:21:14


Inventor(s)

Uday Shanthamallu
Andreas Spanias
Jayaraman Thiagarajan
Huan Song

Technology categories

Computing & Information Technology
Physical Science

Technology keywords

IT
Machine Learning
Networks
PS-Computing and Information Technology
Software and Communication


Licensing Contacts

Shen Yan
Director of Intellectual Property - PS
[email protected]

Multi-Layered Graph Embeddings for Machine Learning

The prevalence of relational data in many real-world applications, e.g., social network analysis, recommendation systems, and neurological modeling, has led to crucial advances in machine learning techniques for graph-structured data.  These techniques encompass a wide range of formulations for mining insights from complex network datasets: node classification, link prediction, community detection, influential node selection, and many others.  Despite the variability in these formulations, a recurring idea in almost all of these approaches is to obtain embeddings for the nodes of a graph before carrying out the downstream learning task.

Graph embedding methods transform a graph into a representation suited to machine learning: nodes, edges, and their features are mapped into a vector space in a way that preserves properties such as the graph's structure.  There is long-standing interest in building multi-layered graph embeddings that can be used in a semi-supervised machine learning setting.
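
As a minimal illustration of what a node embedding is, the Python sketch below computes a classical spectral embedding of a toy graph with NumPy.  This is one standard embedding technique, not the method described on this page, and the graph and dimensions are arbitrary choices for the example.

```python
import numpy as np

# Toy graph: two triangles joined by a bridge edge (6 nodes).
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

# Symmetric degree normalization, a common preprocessing step.
d = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(d, d))

# The leading eigenvectors give a k-dimensional vector per node that
# preserves community structure: nodes in the same triangle land nearby.
vals, vecs = np.linalg.eigh(A_norm)   # eigenvalues in ascending order
k = 2
embeddings = vecs[:, -k:]             # shape (6, 2): one row per node
print(embeddings)
```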

Researchers at Arizona State University and Lawrence Livermore National Laboratory have developed an approach for constructing multi-layered graph embeddings using attention models.  The approach performs feature learning in an end-to-end fashion with a node classification objective, which is superior to employing separate stages of network embedding (e.g., DeepWalk) and classifier design.  First, even in datasets that lack explicit node attributes, random features are a highly effective choice.  Second, attention models provide a powerful framework for modeling inter-layer dependencies and scale easily to a large number of layers.  Building on these observations, an architecture can be developed that employs deep attention models for semi-supervised learning.
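
The sketch below shows a generic single-head graph attention layer in the style of GAT (Velickovic et al.), fed with random node features as the paragraph above suggests.  It is a simplified stand-in under those assumptions, not the patented architecture; the class name, dimensions, and toy graph are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention: each node aggregates its neighbors,
    weighted by learned attention scores."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # feature projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, adj):
        h = self.W(x)                                     # (n, out_dim)
        n = h.size(0)
        # Attention logits e_ij from concatenated pairs [h_i || h_j].
        hi = h.unsqueeze(1).expand(n, n, -1)
        hj = h.unsqueeze(0).expand(n, n, -1)
        e = F.leaky_relu(self.a(torch.cat([hi, hj], dim=-1)).squeeze(-1))
        # Mask non-edges so attention flows only along graph edges.
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(e, dim=-1)                  # rows sum to 1
        return alpha @ h                                  # weighted aggregation

# Toy 4-node chain with self-loops; random features stand in for
# the node attributes the dataset does not have.
adj = torch.tensor([[1., 1., 0., 0.],
                    [1., 1., 1., 0.],
                    [0., 1., 1., 1.],
                    [0., 0., 1., 1.]])
x = torch.randn(4, 8)
layer = GraphAttentionLayer(in_dim=8, out_dim=4)
out = layer(x, adj)   # (4, 4) attention-weighted node representations
```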

The architecture builds layer-specific attention models and subsequently obtains consensus representations through fusion for label prediction.  On several benchmark multi-layered graph datasets, the approach demonstrates the effectiveness of random features and significantly outperforms state-of-the-art network embedding strategies such as DeepWalk.
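
Continuing the sketch above, the fragment below shows one plausible way to fuse layer-specific representations into a consensus representation with learned attention weights over layers, followed by a node classifier.  The module name and shapes are assumptions for illustration, not the patented design.

```python
import torch
import torch.nn as nn

class ConsensusFusion(nn.Module):
    """Attention-weighted fusion of per-layer node representations,
    followed by per-node classification."""
    def __init__(self, dim, n_classes):
        super().__init__()
        self.score = nn.Linear(dim, 1)                    # scores each layer's view
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, layer_reps):                        # (n_layers, n_nodes, dim)
        w = torch.softmax(self.score(layer_reps), dim=0)  # attention over layers
        consensus = (w * layer_reps).sum(dim=0)           # (n_nodes, dim)
        return self.classifier(consensus)                 # class logits per node

# Three graph layers, four nodes, 4-dim representations, 2 classes;
# e.g., the outputs of one GraphAttentionLayer per layer, stacked.
reps = torch.randn(3, 4, 4)
fusion = ConsensusFusion(dim=4, n_classes=2)
logits = fusion(reps)   # (4, 2); train with cross-entropy on labeled nodes
```

In a semi-supervised setting of this kind, the cross-entropy loss would be computed only on the small set of labeled nodes, while the attention layers still propagate information from unlabeled ones.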

Related publication: GrAMME: Semi-Supervised Learning using Multi-layered Graph Attention Models

Issued U.S. Patent No. 11,526,765

Potential Applications:

  • Machine-learning applications, such as semi-supervised learning with multi-layered graph embeddings, employed by:
    • Social networks (e.g., for friend suggestions, entity resolution, etc.)
    • Recommendation generators
    • Endpoint detection and response
    • Network medicine (e.g., for drug discovery)
    • Viral marketers

Benefits and Advantages:

  • Novel attention model architectures for multi-layer graphs in semi-supervised learning problems
  • Architecture uses attention models to parameterize virtual edges in a supra graph
  • Architecture performs layer-wise attention modeling and effectively fuses information from different layers