Time (PST) | Morning Sessions | Speakers/Authors

8:00-8:15 Morning Session: Opening Remarks
Jian Pei, Lingfei Wu
8:15-8:45 Keynote talk 1: Amortization in Graph Neural Networks.

Abstract: Amortization in machine learning is a method to enhance optimization, inference, or even learning with learnable components. Using this idea, we do not have to solve every problem independently but can instead transfer lessons learned from related problems to new ones. In this talk, we will explore this idea in the context of graph neural networks and discuss a number of concrete architectures and learning algorithms, namely for error-correction decoding, MIMO channel demodulation, solving the traveling salesman problem, and discovering causal relations between interacting objects.


Max Welling
8:45-9:15 Keynote talk 2: Learning to Solve the Traveling Salesman Problem with Transformers.

Abstract: We introduce a transformer architecture to solve the TSP via reinforcement learning.


Xavier Bresson
9:15-9:45 Two Contributed Talks + Live QA

Talk 1: Probabilistic Dual Network Architecture Search on Graph [Video]

Yiren Zhao, Duo Wang, Xitong Gao, Robert Mullins, Pietro Liò, Mateja Jamnik

Talk 2: Multi-view Graph Contrastive Representation Learning for Drug-Drug Interaction Prediction [Video]

Yingheng Wang, Yaosen Min, Xin Chen, Ji Wu

9:45-10:15 Keynote talk 3 (Live): Tackling Structure and Oversmoothing for Graph-based SSL.

Abstract: Semi-supervised learning (SSL) is widely used for classification, thanks to its ability to make use of abundant unlabeled data. Graph neural networks (GNNs) provide state-of-the-art performance on graph-based SSL problems. However, performance gradually decays with an increasing number of layers, partly due to oversmoothing. In the absence of an input graph, how should one construct a graph from input point-cloud data for graph-based SSL? First, I will introduce PGLearn, a new parallel graph learning framework that addresses this question. Next, I will focus on oversmoothing: I will discuss two different interpretations and present PairNorm, a novel normalization layer that is based on a detailed analysis of the graph convolution operator and is broadly applicable to any GNN.


Leman Akoglu
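The PairNorm idea in the abstract above can be sketched in a few lines: center the node features, then rescale them so that the mean squared row norm (and hence the total pairwise distance between node representations) stays roughly constant across layers. This is an illustrative sketch under the usual formulation; the `scale` hyperparameter and `eps` constant here are assumptions, and the talk's exact variant may differ.

```python
import numpy as np

def pair_norm(x, scale=1.0, eps=1e-6):
    """Center node features, then rescale so the mean squared row
    norm is roughly scale**2, keeping total pairwise distances
    between node representations stable across GNN layers."""
    x = x - x.mean(axis=0, keepdims=True)        # center each feature column
    mean_sq_norm = (x ** 2).sum(axis=1).mean()   # mean squared L2 row norm
    return scale * x / np.sqrt(mean_sq_norm + eps)

rng = np.random.default_rng(0)
h = rng.normal(size=(4, 3))   # toy node features: 4 nodes, 3 dimensions
out = pair_norm(h)
```

After normalization, the features are centered and the mean squared row norm is (up to `eps`) equal to `scale**2`, regardless of how much the preceding graph convolution shrank the representations toward each other.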
10:15-10:30 Coffee Break/Social Networking
10:30-11:00 Keynote talk 4: Advanced Graph and Sequence Neural Networks for Molecular Property Prediction and Drug Discovery.

Abstract: Properties of molecules are indicative of their functions and thus are useful in many applications. As a cost-effective alternative to experimental approaches, computational methods for predicting molecular properties are gaining increasing momentum and success. However, a comprehensive collection of tools and methods for this task is currently lacking. Here we develop MoleculeKit, a suite of comprehensive machine learning tools spanning different computational models and molecular representations for molecular property prediction and drug discovery. Specifically, MoleculeKit represents molecules as both graphs and sequences. Built on these representations, MoleculeKit includes both deep learning and traditional machine learning methods for graph and sequence data. Notably, we propose and develop novel deep models for learning from molecular graphs and sequences. Therefore, MoleculeKit not only serves as a comprehensive tool but also contributes towards developing novel and advanced graph and sequence learning methodologies. Results on both online and offline antibiotics discovery and molecular property prediction tasks show that MoleculeKit achieves consistent improvements over prior methods.


Shuiwang Ji
11:00-11:30 Keynote talk 5: Exploring Rare Categories on Graphs: Local vs. Global.

Abstract: Rare categories refer to the under-represented minority classes in imbalanced data sets. They are prevalent across many high-impact applications in the security domain where the input data can be represented as graphs. In this talk, I will focus on two complementary strategies for exploring such rare categories -- local vs. global. With the local strategy, the goal is to explore a small neighborhood around a seed node from the rare category in order to identify additional rare examples; with the global strategy, the goal is to explore the entire graph in order to identify rare-category-oriented representations. For each strategy, I will introduce recent techniques proposed by the iSAIL Lab (https://isail-laboratory.github.io). Towards the end, I will also discuss potential future directions on this topic.


Jingrui He
11:30-12:30 Poster Session (Spotlight Talks + Live QA)

Time (PST) | Afternoon Sessions | Speakers/Authors

13:00-13:15 Afternoon Session: Opening Remarks
Yinglong Xia, Jiliang Tang
13:15-14:00 Keynote talk 6: Design Space for Graph Neural Networks.

Abstract: The rapid evolution of Graph Neural Networks (GNNs) has led to a growing number of new architectures as well as novel applications. However, current research focuses on proposing and evaluating specific architectural designs of GNNs, such as GCN, GIN, or GAT, and it is hard to track progress. Additionally, GNN designs are often specialized to a single task, yet few efforts have been made to understand how to quickly find the best GNN design for a novel task or a novel dataset. In this talk, I discuss two projects: (1) the Open Graph Benchmark, a set of benchmark datasets for machine learning with graphs; and (2) a design space for GNNs, where we define and systematically study the architectural design space for GNNs, which consists of 315,000 different designs over 32 different predictive tasks. Our approach features three key innovations: (1) a general GNN design space; (2) a GNN task space with a similarity metric, so that for a given novel task/dataset, we can quickly identify/transfer the best performing architecture; (3) an efficient and effective design space evaluation method which allows insights to be distilled from a huge number of model-task combinations. Our key results include: (1) a comprehensive set of guidelines for designing well-performing GNNs; (2) while the best GNN designs for different tasks vary significantly, the GNN task space allows for transferring the best designs across different tasks; (3) models discovered using our design space achieve state-of-the-art performance. Overall, our work offers a principled and scalable approach to transition from studying individual GNN designs for specific tasks, to systematically studying the GNN design space and the task space.


Jure Leskovec
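The notion of a design space as described above can be illustrated as a Cartesian product of per-dimension design choices, where every combination of choices is one concrete GNN design. The dimension names and option values below are hypothetical, chosen only to show how combinations multiply; the paper's actual space, which totals 315,000 designs, differs.

```python
from itertools import product

# Hypothetical design dimensions (illustrative only).
space = {
    "num_layers":   [2, 4, 6, 8],
    "aggregation":  ["sum", "mean", "max"],
    "activation":   ["relu", "prelu", "swish"],
    "batch_norm":   [True, False],
    "skip_connect": ["none", "skip-sum", "skip-cat"],
}

# Each design is one combination of choices: the design space is the
# Cartesian product of the per-dimension option lists.
designs = [dict(zip(space, combo)) for combo in product(*space.values())]
n_designs = len(designs)   # 4 * 3 * 3 * 2 * 3 = 216 in this toy space
```

Even this toy space with five dimensions yields 216 designs, which is why an efficient evaluation method for distilling insights from many model-task combinations matters.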
14:00-14:30 Keynote talk 7: NetFair: Towards Fair Network Mining.

Abstract: Network (i.e., graph) mining plays a pivotal role in many high-impact application domains. The state of the art offers a wealth of sophisticated theories and algorithms, primarily focusing on answering who- or what-type questions. On the other hand, the why or how questions of network mining have not been well studied. For example, how can we ensure network mining is fair? How do mining results relate to the input graph topology? Why does the mining algorithm 'think' a transaction looks suspicious? In this talk, I will present our work on addressing individual fairness in graph mining. First, we present a generic definition of individual fairness for graph mining, which naturally leads to a quantitative measure of the potential bias in graph mining results. Second, we propose three mutually complementary algorithmic frameworks to mitigate the proposed individual bias measure, namely debiasing the input graph, debiasing the mining model, and debiasing the mining results. Each algorithmic framework is formulated from the optimization perspective, using effective and efficient solvers, which are applicable to multiple graph mining tasks. Third, accommodating individual fairness is likely to change the original graph mining results obtained without the fairness consideration. We develop an upper bound to characterize the cost (i.e., the difference between the graph mining results with and without the fairness consideration).


Hanghang Tong
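One common way to turn individual fairness into the kind of quantitative bias measure the abstract mentions is a Laplacian quadratic form over a node-similarity graph: mining results count as biased to the extent that similar individuals receive dissimilar results. The sketch below illustrates that idea on a toy example; the similarity matrix, the `individual_bias` helper, and the exact definition are illustrative assumptions, not necessarily the talk's formulation.

```python
import numpy as np

# S[i, j] encodes how similar individuals i and j are.
S = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
L = np.diag(S.sum(axis=1)) - S   # Laplacian of the similarity graph

def individual_bias(Y, L):
    """Tr(Y^T L Y): sums S[i, j] * ||Y[i] - Y[j]||^2 over similar pairs,
    so it is small when similar individuals get similar mining results."""
    return float(np.trace(Y.T @ L @ Y))

Y_fair   = np.ones((3, 1))                # identical results for everyone
Y_unfair = np.array([[1.], [0.], [1.]])   # similar nodes, different results
```

Here `individual_bias(Y_fair, L)` is zero, while `Y_unfair` incurs a positive bias because node 1 is similar to nodes 0 and 2 but receives a different result; debiasing then means reducing this quantity via the graph, the model, or the results.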
14:30-15:00 Two Contributed Talks + Live QA

Talk 1: Solving Cold Start Problem in Semi-Supervised Graph Learning [Video]

Il-Jae Kwon, Kyoung-Woon On, Dong-Geon Lee, Byoung-Tak Zhang

Talk 2: Graph Sparsification via Meta-Learning [Video]

Guihong Wan, Harsha Kokel

15:00-15:15 Coffee Break/Social Networking
15:15-15:45 Keynote talk 8: Learning Symbolic Logic Rules for Reasoning on Knowledge Graphs.

Abstract: In this talk, I will introduce our latest progress on learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks, and hence are critical to learn. Existing methods either suffer from searching in a large search space (e.g., neural logic programming) or from ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, we propose a probabilistic model called RNNLogic. RNNLogic treats logic rules as a latent variable, and simultaneously trains a rule generator as well as a reasoning predictor with logic rules. We develop an EM-based algorithm for optimization. In each iteration, the reasoning predictor is first updated to explore some generated logic rules for reasoning. Then, in the E-step, we select a set of high-quality rules from all generated rules using both the rule generator and the reasoning predictor via posterior inference; in the M-step, the rule generator is updated with the rules selected in the E-step. Experiments on four datasets demonstrate the effectiveness of RNNLogic.


Jian Tang
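The EM loop in the abstract alternates between selecting high-quality rules (E-step) and updating the rule generator toward them (M-step). A toy, non-neural sketch of that control flow is below: rule ids, their hidden utilities, and the selection heuristic are all made up for illustration, standing in for the paper's neural rule generator and reasoning predictor.

```python
import random

random.seed(0)

# Toy setup: "rules" are just ids; a hidden utility stands in for how
# well the reasoning predictor would score each rule on training queries.
candidate_rules = list(range(10))
true_utility = {r: (1.0 if r in (2, 5, 7) else 0.1) for r in candidate_rules}

# Rule generator: a categorical distribution over candidate rules.
gen_weight = {r: 1.0 for r in candidate_rules}

for step in range(50):
    # Generator proposes a batch of rules.
    weights = [gen_weight[r] for r in candidate_rules]
    proposed = random.choices(candidate_rules, weights=weights, k=20)
    # Predictor scores each proposed rule (utility plus a little noise).
    scored = [(r, true_utility[r] + random.gauss(0.0, 0.05)) for r in proposed]
    # E-step: keep the highest-scoring rules as "high-quality".
    selected = sorted(scored, key=lambda p: p[1], reverse=True)[:5]
    # M-step: move the generator toward the selected rules.
    for r, _ in selected:
        gen_weight[r] += 1.0

good_mass = sum(gen_weight[r] for r in (2, 5, 7))
bad_mass = sum(gen_weight[r] for r in candidate_rules if r not in (2, 5, 7))
```

After a few iterations the generator concentrates its probability mass on the genuinely useful rules, which mirrors how alternating rule selection and generator updates sidesteps both exhaustive search and sparse-reward optimization.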
15:45-16:00 Best Paper Award Ceremony + Final Remarks
16:00-17:00 Poster Session (Spotlight Talks + Live QA)