Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies at Qualcomm.
Title: Amortization in Graph Neural Networks.
Abstract: Amortization in machine learning is a method to enhance optimization, inference, or even learning with learnable components. Using this idea, we do not have to solve every problem independently but can instead transfer lessons learned from related problems to new problems. In this talk we will explore this idea in the context of graph neural networks and discuss several architectures and learning algorithms explicitly, namely for error-correction decoding, MIMO channel demodulation, solving the traveling salesman problem, and discovering causal relations between interacting objects.
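As a toy illustration of the amortization idea (not an example from the talk; the problem family, model, and names below are made up), the sketch contrasts per-instance optimization with a learned map from problem parameters to solutions:

```python
import numpy as np

# Toy problem family: minimize f(x) = 0.5*(x - a)**2 + b*x over x,
# whose minimizer is x* = a - b.

def solve_direct(a, b, steps=200, lr=0.1):
    """Solve one instance from scratch by gradient descent on f."""
    x = 0.0
    for _ in range(steps):
        x -= lr * ((x - a) + b)  # f'(x) = (x - a) + b
    return x

# Amortization: fit a model mapping problem parameters (a, b) to solutions,
# trained once on solved instances, then reused on new problems.
rng = np.random.default_rng(0)
params = rng.uniform(-1.0, 1.0, size=(50, 2))            # training instances (a, b)
solutions = np.array([solve_direct(a, b) for a, b in params])
coef, *_ = np.linalg.lstsq(params, solutions, rcond=None)  # least-squares fit

def solve_amortized(a, b):
    """Predict the solution directly; no per-instance optimization."""
    return coef @ np.array([a, b])
```

Because the toy solution map is exactly linear, the fitted model recovers it; in the settings the talk covers, the learnable component is of course a neural network and the map is far from linear.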
Jure Leskovec is a Slovenian computer scientist, entrepreneur and associate professor of Computer Science at Stanford University focusing on networks. He is the chief scientist at Pinterest.
Title: Design Space for Graph Neural Networks.
Abstract: The rapid evolution of Graph Neural Networks (GNNs) has led to a growing number of new architectures as well as novel applications. However, current research focuses on proposing and evaluating specific architectural designs of GNNs, such as GCN, GIN, or GAT, and it is hard to track progress. Additionally, GNN designs are often specialized to a single task, yet few efforts have been made to understand how to quickly find the best GNN design for a novel task or a novel dataset. In this talk I discuss two projects: (1) Open Graph Benchmark, which is a set of benchmark datasets for machine learning with graphs. And, (2) Design space for GNNs where we define and systematically study the architectural design space for GNNs which consists of 315,000 different designs over 32 different predictive tasks. Our approach features three key innovations: (1) A general GNN design space; (2) a GNN task space with a similarity metric, so that for a given novel task/dataset, we can quickly identify/transfer the best performing architecture; (3) an efficient and effective design space evaluation method which allows insights to be distilled from a huge number of model-task combinations. Our key results include: (1) A comprehensive set of guidelines for designing well-performing GNNs; (2) while best GNN designs for different tasks vary significantly, the GNN task space allows for transferring the best designs across different tasks; (3) models discovered using our design space achieve state-of-the-art performance. Overall, our work offers a principled and scalable approach to transition from studying individual GNN designs for specific tasks, to systematically studying the GNN design space and the task space.
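As a rough illustration of what an architectural design space looks like (the dimensions and values below are hypothetical and much smaller than the 315,000-design space from the talk), a grid of design choices can be enumerated as a Cartesian product:

```python
import itertools

# Hypothetical slice of a GNN design space: each key is a design dimension,
# each value list the options considered for that dimension.
design_space = {
    "layers": [2, 4, 6, 8],
    "aggregation": ["mean", "max", "sum"],
    "activation": ["relu", "prelu", "swish"],
    "batchnorm": [True, False],
    "layer_connectivity": ["stack", "skip-sum", "skip-cat"],
}

# Every concrete design is one combination of choices: 4*3*3*2*3 = 216 here.
designs = [dict(zip(design_space, combo))
           for combo in itertools.product(*design_space.values())]
```

The combinatorial growth is why the talk's third innovation, an efficient design-space evaluation method, matters: exhaustively training every combination quickly becomes infeasible.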
Hanghang Tong is currently an associate professor at Department of Computer Science at University of Illinois at Urbana-Champaign.
Title: NetFair: Towards Fair Network Mining.
Abstract: Network (i.e., graph) mining plays a pivotal role in many high-impact application domains. The state of the art offers a wealth of sophisticated theories and algorithms, primarily focused on answering the who or what type of questions. On the other hand, the why and how questions of network mining have not been well studied. For example, how can we ensure that network mining is fair? How do mining results relate to the input graph topology? Why does the mining algorithm 'think' a transaction looks suspicious? In this talk, I will present our work on addressing individual fairness in graph mining. First, we present a generic definition of individual fairness for graph mining, which naturally leads to a quantitative measure of the potential bias in graph mining results. Second, we propose three mutually complementary algorithmic frameworks to mitigate the proposed individual bias measure, namely debiasing the input graph, debiasing the mining model, and debiasing the mining results. Each algorithmic framework is formulated from the optimization perspective, using effective and efficient solvers, and is applicable to multiple graph mining tasks. Third, since accommodating individual fairness is likely to change the original mining results obtained without the fairness consideration, we develop an upper bound to characterize the cost (i.e., the difference between the graph mining results with and without the fairness consideration).
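One common way to quantify individual bias in this line of work is via the Laplacian of a node-similarity matrix: the measure is zero exactly when similar nodes receive identical mining results. The sketch below assumes that formulation (the function name and toy inputs are illustrative):

```python
import numpy as np

def individual_bias(Y, S):
    """Bias(Y, S) = trace(Y^T L_S Y), where L_S = D - S is the Laplacian
    of the node-similarity matrix S and Y holds the mining results
    (one row per node). Equals 0.5 * sum_ij S[i,j] * ||Y[i] - Y[j]||^2,
    so it vanishes iff similar nodes get the same results."""
    L = np.diag(S.sum(axis=1)) - S
    return np.trace(Y.T @ L @ Y)
```

Debiasing the mining results then amounts to minimizing this quantity subject to staying close to the original results, which is where the cost upper bound comes in.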
Jingrui He is an associate professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign.
Title: Exploring Rare Categories on Graphs: Local vs. Global.
Abstract: Rare categories refer to the under-represented minority classes in imbalanced data sets. They are prevalent across many high-impact applications in the security domain, where the input data can be represented as graphs. In this talk, I will focus on two complementary strategies for exploring such rare categories -- local vs. global. With the local strategy, the goal is to explore a small neighborhood around a seed node from the rare category in order to identify additional rare examples; with the global strategy, the goal is to explore the entire graph in order to identify rare-category-oriented representations. For each strategy, I will introduce recent techniques proposed by the iSAIL Lab. Towards the end, I will also discuss potential future directions on this topic.
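A minimal sketch of the local strategy, assuming the neighborhood exploration is a bounded breadth-first expansion around the seed (the actual techniques from the talk are more sophisticated than this; names are illustrative):

```python
from collections import deque

def neighborhood_candidates(adj, seed, radius=2):
    """Collect all nodes within `radius` hops of a labeled rare-category
    seed node, as candidates for further querying/labeling.
    `adj` is an adjacency list: node -> list of neighbors."""
    dist = {seed: 0}
    queue = deque([seed])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:       # do not expand beyond the radius
            continue
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [v for v in dist if v != seed]
```

The global strategy would instead operate on the whole graph, e.g. learning node representations under which the rare category separates out.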
Leman Akoglu is the Heinz College Dean's Associate Professor at Carnegie Mellon University's Heinz College of Information Systems and Public Policy.
Title: Tackling Structure and Oversmoothing for Graph-based SSL.
Abstract: Semi-supervised learning (SSL) is widely used for classification, thanks to its ability to make use of abundant unlabeled data. Graph neural networks (GNNs) provide state-of-the-art performance on graph-based SSL problems. However, performance gradually decays with an increasing number of layers, partly due to oversmoothing. Moreover, in the absence of an input graph, how should one construct a graph from input point-cloud data for graph-based SSL? First, I will introduce a new parallel graph learning framework called PGLearn that addresses this question. Next, I will focus on oversmoothing: I will discuss two different interpretations and present PairNorm, a novel normalization layer based on a detailed analysis of the graph convolution operator, broadly applicable to any GNN.
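PairNorm's core operation can be sketched as follows, assuming the commonly described two-step form: center node features across nodes, then rescale so the total pairwise squared distance stays constant (the `scale` parameter and the row-wise rescaling variant here are illustrative choices, not the only formulation):

```python
import numpy as np

def pairnorm(x, scale=1.0, eps=1e-6):
    """Normalize a node-feature matrix x of shape (num_nodes, dim).
    Step 1: subtract the per-feature mean, so features are centered
    across nodes. Step 2: rescale so the mean squared row norm equals
    scale**2, which keeps the average pairwise node distance from
    shrinking as layers are stacked (the oversmoothing symptom)."""
    x = x - x.mean(axis=0, keepdims=True)
    rownorm_mean = np.sqrt((x ** 2).sum(axis=1).mean() + eps)
    return scale * x / rownorm_mean
```

Applied between graph-convolution layers, this keeps node representations from collapsing toward a single point while leaving relative geometry intact.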
Xavier Bresson is an Associate Professor of Computer Science at Nanyang Technological University (NTU) in Singapore.
Title: Learning to Solve the Traveling Salesman Problem with Transformers.
Abstract: We introduce a transformer architecture to solve the TSP via reinforcement learning.
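As context for what decoding a TSP tour looks like computationally, the sketch below shows a greedy autoregressive decoder over a per-step scoring function. Plugging in negative distance as the score yields the classic nearest-neighbor heuristic; in the talk's setting, the score would instead come from the learned transformer policy (this stand-in is an assumption for illustration, not the paper's architecture):

```python
import math

def tour_length(coords, tour):
    """Total length of a closed tour over 2D city coordinates."""
    n = len(tour)
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % n]])
               for i in range(n))

def greedy_decode(coords, score):
    """Build a tour one city at a time: from the current city, always
    move to the unvisited city with the highest score(current, next)."""
    n = len(coords)
    tour, visited = [0], {0}
    while len(tour) < n:
        nxt = max((j for j in range(n) if j not in visited),
                  key=lambda j: score(tour[-1], j))
        tour.append(nxt)
        visited.add(nxt)
    return tour
```

Reinforcement learning enters by using the (negative) tour length of decoded tours as the reward signal for training the scoring policy.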
Shuiwang Ji is currently an Associate Professor in the Department of Computer Science & Engineering, Texas A&M University, leading the Data Integration, Visualization, and Exploration (DIVE) Laboratory.
Title: Advanced Graph and Sequence Neural Networks for Molecular Property Prediction and Drug Discovery.
Abstract: Properties of molecules are indicative of their functions and thus are useful in many applications. As a cost-effective alternative to experimental approaches, computational methods for predicting molecular properties are gaining increasing momentum and success. However, a comprehensive collection of tools and methods for this task is currently lacking. Here we develop MoleculeKit, a suite of comprehensive machine learning tools spanning different computational models and molecular representations for molecular property prediction and drug discovery. Specifically, MoleculeKit represents molecules as both graphs and sequences. Built on these representations, MoleculeKit includes both deep learning and traditional machine learning methods for graph and sequence data. Notably, we propose and develop novel deep models for learning from molecular graphs and sequences. Therefore, MoleculeKit not only serves as a comprehensive tool, but also contributes towards developing novel and advanced graph and sequence learning methodologies. Results on both online and offline antibiotics discovery and molecular property prediction tasks show that MoleculeKit achieves consistent improvements over prior methods.
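A toy sketch of the two molecular views the abstract mentions, graph and sequence, for a single molecule (the molecule, features, and variable names below are illustrative, not MoleculeKit's API):

```python
from collections import Counter

# Ethanol as a toy example: three heavy atoms C-C-O with single bonds,
# and its SMILES string "CCO" as the sequence view.
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]
smiles = "CCO"

# Graph view: adjacency list over atoms, with node degrees as a crude
# structural feature a graph model could consume.
adj = {i: [] for i in range(len(atoms))}
for u, v in bonds:
    adj[u].append(v)
    adj[v].append(u)
degrees = [len(adj[i]) for i in range(len(atoms))]

# Sequence view: character bigram counts of the SMILES string, the kind
# of feature a traditional sequence model could consume.
bigrams = Counter(smiles[i:i + 2] for i in range(len(smiles) - 1))
```

The point of combining both views is that graph models capture bond topology directly, while sequence models exploit well-studied string machinery; an ensemble over the two tends to be complementary.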
Jian Tang is currently an assistant professor at Mila-Quebec AI Institute and HEC Montreal.
Title: Learning Symbolic Logic Rules for Reasoning on Knowledge Graphs.
Abstract: In this talk, I am going to introduce our latest progress on learning logic rules for reasoning on knowledge graphs. Logic rules provide interpretable explanations when used for prediction and can generalize to other tasks, and hence are critical to learn. Existing methods either suffer from searching in a large search space (e.g., neural logic programming) or from ineffective optimization due to sparse rewards (e.g., techniques based on reinforcement learning). To address these limitations, we propose a probabilistic model called RNNLogic. RNNLogic treats logic rules as a latent variable and simultaneously trains a rule generator as well as a reasoning predictor with logic rules. We develop an EM-based algorithm for optimization. In each iteration, the reasoning predictor is first updated to explore some generated logic rules for reasoning. Then, in the E-step, we select a set of high-quality rules from all generated rules via posterior inference, using both the rule generator and the reasoning predictor; in the M-step, the rule generator is updated with the rules selected in the E-step. Experiments on four datasets demonstrate the effectiveness of RNNLogic.
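The EM loop described above can be sketched on a toy rule set (all rules, scores, and hyperparameters below are made up; the real predictor is a learned model over knowledge-graph queries, not a fixed table):

```python
import math

# Candidate rules and a stub "reasoning predictor": a fixed log-likelihood
# saying how well each rule explains the data (higher is better).
rules = ["r1", "r2", "r3", "r4"]
rule_loglik = {"r1": -0.1, "r2": -2.0, "r3": -0.3, "r4": -3.0}

gen = {r: 1.0 / len(rules) for r in rules}  # rule generator: uniform prior
K, eps = 2, 0.01                            # rules kept per E-step; smoothing

for _ in range(10):
    # E-step: posterior over rules is proportional to the generator's
    # prior times the predictor's likelihood; keep the top-K rules.
    post = {r: gen[r] * math.exp(rule_loglik[r]) for r in rules}
    top = set(sorted(post, key=post.get, reverse=True)[:K])
    # M-step: update the generator toward the selected high-quality rules
    # (here a smoothed indicator; in RNNLogic, an RNN is trained on them).
    gen = {r: ((r in top) + eps) / (K + len(rules) * eps) for r in rules}
```

After a few iterations, the generator concentrates its mass on the rules the predictor finds useful, which is the mechanism that sidesteps both exhaustive search and sparse-reward RL.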