Welcome to Deep Learning on Graphs: Methods and Applications (DLG-KDD’21)!
Jure Leskovec is a Slovenian computer scientist, entrepreneur, and Associate Professor of Computer Science at Stanford University, focusing on networks. He is also the Chief Scientist at Pinterest.
Title: Reasoning in Knowledge Graphs with Beta Embeddings
Jiawei Han is currently the Michael Aiken Chair Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign.
Title: Deep Learning on Text-Intensive Graphs: Exploring the Power of Text Embedding
Petar Velickovic is a Senior Research Scientist at DeepMind. He holds a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. Petar's research concerns geometric deep learning: devising neural network architectures that respect the invariances and symmetries in data (a topic he's co-written a proto-book about).
Title: Neural Algorithmic Reasoning
Abstract: Algorithms have been fundamental to recent global technological advances and, in particular, they have been the cornerstone of technical advances in one field rapidly being applied to another. It can be argued that algorithms possess fundamentally different qualities to deep learning methods, and this strongly suggests that, were deep learning methods better able to mimic algorithms, generalisation of the sort seen with algorithms would become possible with deep learning—something far out of the reach of current machine learning methods. Furthermore, by representing elements in a continuous space of learnt algorithms, neural networks are able to adapt known algorithms more closely to real-world problems, potentially finding more efficient and pragmatic solutions than those proposed by human computer scientists.
Here I will present neural algorithmic reasoning—the art of building neural networks that are able to execute algorithmic computation—and provide an opinion on its transformative potential for running classical algorithms on inputs previously considered inaccessible to them.
Wei Wang is the Leonard Kleinrock Chair Professor in Computer Science and Computational Medicine at the University of California, Los Angeles, and the director of the Scalable Analytics Institute (ScAi).
Title: Deep Learning for Graph Similarity Search
Xavier Bresson is an Associate Professor of Computer Science at the National University of Singapore (NUS).
Title: Keynote talk 3: Graph Neural Networks with Learnable Structural and Positional Representations
Abstract: Graph neural networks (GNNs) have become the standard toolkit for analyzing and learning from data on graphs. GNNs have been applied to domains ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information for nodes, which limits the power of GNNs to distinguish, e.g., isomorphic nodes and other graph symmetries. One approach to this issue is to introduce a positional encoding (PE) of nodes and inject it into the input layer, as in Transformers. Graph Laplacian eigenvectors are natural candidates for graph PE, but their sign is not uniquely defined. In this work, we propose to decouple structural and positional representations, making it easy for the network to learn these two properties. We show that any GNN can be augmented with learnable PE, improving its performance. We investigate sparse and fully-connected (Transformer-like) GNNs and observe that learning PE is useful for both classes.
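As a rough illustration of the Laplacian positional encodings mentioned in the abstract, the sketch below computes the smallest non-trivial eigenvectors of the symmetric normalized graph Laplacian as node PE. This is a minimal, generic sketch under assumed conventions (the function name and the toy cycle graph are illustrative), not the talk's actual implementation; note the sign ambiguity it highlights.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Return the k smallest non-trivial Laplacian eigenvectors as node PE.
    Each eigenvector's sign is arbitrary (+v and -v are both valid),
    which is the ambiguity that motivates learnable PE."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    vals, vecs = np.linalg.eigh(lap)          # eigenvalues in ascending order
    return vecs[:, 1:k + 1]                   # skip the trivial constant eigenvector

# toy example: 4-node cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(A, 2)                       # one 2-dimensional PE per node
```

For a cycle, isomorphic nodes receive distinct coordinates in this PE, which is exactly the positional information plain message passing lacks.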
Le Song is an Associate Professor at Georgia Institute of Technology.
Title: Molecule Optimization by Explainable Evolution
Abstract: Optimizing molecules for desired properties is a fundamental yet challenging task in chemistry, material science, and drug discovery. We develop a novel algorithm for optimizing molecular properties via an Expectation-Maximization (EM)-like explainable evolutionary process. The algorithm is designed to mimic human experts in the process of searching for desirable molecules and alternates between two stages: the first stage performs explainable local search, which identifies rationales, i.e., critical subgraph patterns accounting for desired molecular properties; the second stage performs molecule completion, which explores the larger space of molecules containing good rationales. We test our approach against various baselines on a real-world multi-property optimization task where each method is given the same number of queries to the property oracle. We show that our evolution-by-explanation algorithm is 79% better than the best baseline in terms of a generic metric combining aspects such as success rate, novelty, and diversity. Human expert evaluation on optimized molecules shows that 60% of top molecules obtained from our methods are deemed successful.
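The two-stage alternation described in the abstract can be sketched as a toy loop. Everything here is an illustrative stand-in: bit strings play the role of molecules, the oracle just counts 1-bits, and the rationale/completion rules are deliberately simplistic; this only shows the control flow of rationale extraction alternating with completion, not the paper's method.

```python
import random

random.seed(0)

def oracle(mol):
    """Property oracle (toy): score a 'molecule' by its number of 1-bits."""
    return mol.count("1")

def extract_rationale(mol):
    """Stage 1 (toy): keep the higher-scoring half as the 'critical subgraph'."""
    half = len(mol) // 2
    a, b = mol[:half], mol[half:]
    return a if oracle(a) >= oracle(b) else b

def complete(rationale, length):
    """Stage 2 (toy): grow a full candidate containing the rationale."""
    pad = "".join(random.choice("01") for _ in range(length - len(rationale)))
    return rationale + pad

population = ["0000", "0101", "0011"]
for _ in range(5):                      # EM-like alternation of the two stages
    rationales = [extract_rationale(m) for m in population]
    candidates = population + [complete(r, 4) for r in rationales]
    population = sorted(candidates, key=oracle, reverse=True)[:3]

best = max(population, key=oracle)
```

Because each generation keeps the previous population among its candidates, the best score never decreases, mirroring the monotone-improvement flavor of EM.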
Heng Huang is a John A. Jurenko Endowed Professor in Computer Engineering at the University of Pittsburgh.
Title: Utilizing Graph Intrinsic Structures to Enhance Deep Neural Networks for Feature Learning
Abstract: Graph data are ubiquitous in the real world, e.g., social networks, biological networks, and brain networks. To analyze graph data, a fundamental task is to learn node features that benefit downstream tasks such as node classification and community detection. Inspired by the powerful feature learning capability of deep neural networks on various tasks, it is important and necessary to explore deep neural networks for feature learning on graphs. Unlike regular image and sequence data, graph data encode complicated relational information between nodes, which challenges classical deep neural networks. To address these challenges, we proposed several new deep neural networks that effectively exploit this relational information for feature learning on graph data.
First, to preserve the relational information in the hidden layers of deep neural networks, we developed a novel graph convolutional neural network (GCN) based on conditional random fields, which is the first algorithm applying this kind of graphical model to graph neural networks in an unsupervised manner. Second, to address the sparsity of the relational information, we proposed a new proximity generative adversarial network that can discover the underlying relational information for learning better node representations. We also designed several graph neural network models for brain network data analysis and integration.
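For readers unfamiliar with how a GCN layer propagates relational information, here is a minimal single-layer sketch in the style of the standard normalized-adjacency formulation. It is a generic illustration of message passing on graphs, not the CRF-based or GAN-based models described above; the graph, features, and weights are arbitrary toy values.

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W).
    Each node's new feature mixes its own and its neighbors' features."""
    a_hat = adj + np.eye(len(adj))                     # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = deg ** -0.5
    norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    return np.maximum(norm @ feats @ weight, 0.0)      # ReLU activation

# toy example: 3-node path graph with one-hot node features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.eye(3)                                          # one-hot features
W = np.ones((3, 2))                                    # toy weight matrix
H = gcn_layer(A, X, W)                                 # new node features, shape (3, 2)
```

After one such layer, each node's representation already reflects its one-hop neighborhood, which is the "relational information" the abstract refers to.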