Time / Session / Speaker(s)

8:30-8:45 Opening Remarks
8:45-9:30 MLG Keynote Talk 1: Towards Automatic Construction of Text-Rich Information Networks from Text [slides]
Abstract: Graphs and texts are both ubiquitous in today's information world. However, how to automatically construct text-rich information networks from massive, dynamic, and unstructured text, without human annotation or supervision, remains an open problem. In recent years, our group has been studying effective methods for automatically mining hidden structures and knowledge from text, where such hidden structures include entities, relations, events, and knowledge graph structures. Equipped with pretrained language models, machine learning methods, and human-provided ontological structures, it is promising to transform unstructured text data into structured knowledge. In this talk, we will provide an overview of a set of recently developed weakly supervised machine learning methods for this exploration, including joint spherical text embedding, discriminative topic mining, named entity recognition, relation extraction, event discovery, text classification, and taxonomy-guided text analysis. We show that weakly supervised approaches are promising for transforming massive text data into structured knowledge graphs.

Jiawei Han
9:30-10:00 Break and Poster Setup
10:00-10:45 DLG Keynote Talk 1: Scaling up Graph Neural Networks at Snap [slides]
Abstract: Graph Neural Networks (GNNs) are an increasingly popular tool for graph machine learning and have shown great results on a wide variety of node-, link-, and graph-level tasks. Yet, they are less popular for practical deployments in industry settings owing to their unique scalability challenges. Large graphs are expensive to store and to train GNN models over, and the computational overhead worsens dramatically in applications that require neural architecture search. Moreover, trained GNN models are expensive to deploy in real-time inference settings, where the complex multi-hop data dependency characteristic of GNNs manifests as latency overhead from fetching the features and graph topology required to make inferences. In this talk, I will discuss recent advances in efficient training and inference for GNNs aimed at circumventing these challenges, drawn from our group's recent works published at ICLR'22.

Neil Shah
10:45-11:00 MLG Oral Presentation: How to Quantify Polarization in Models of Opinion Dynamics (Christopher Musco, Indu Ramesh, Johan Ugander and R. Teal Witter)
11:00-11:15 DLG Oral Presentation: Partition-Based Active Learning for Graph Neural Networks (Jiaqi Ma, Ziqiao Ma, Joyce Chai and Qiaozhu Mei)
11:15-12:00 Panel Topic: Learning and Reasoning on Knowledge Graphs: Graph Neural Networks vs Foundation Models
12:00-13:30 Lunch Break
13:30-14:15 DLG Keynote Talk 2: Combining Representation Learning and Logical Rule Reasoning for Knowledge Graph Inference [slides]
Abstract: Knowledge graph inference has been studied extensively due to its wide applications. It has been addressed by two lines of research: the more traditional logical rule reasoning and the more recent knowledge graph embedding (KGE). In this talk, we will introduce two recent developments from our group that combine these two worlds. First, we propose to leverage logical rules to bring high-order dependencies among entities and relations into KGE. By limiting the logical rules to definite Horn clauses, we are able to fully exploit the knowledge in logical rules and enable mutual enhancement of logical rule-based reasoning and KGE in an extremely efficient way. Second, we propose to handle logical queries by representing fuzzy sets as specially designed vectors and retrieving answers via dense vector computation. In particular, we provide embedding-based logical operators that strictly follow the axioms required by fuzzy logic and that can be trained with self-supervised knowledge completion tasks. With additional query-answer pairs, the performance can be further enhanced. With this evidence, we believe combining logic with representation learning provides a promising direction for knowledge reasoning.

Yizhou Sun
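The fuzzy-set formulation in the abstract above can be illustrated with a minimal sketch (a hypothetical example, not the speakers' implementation): fuzzy sets over a small entity universe are represented as membership vectors in [0, 1], and the logical operators are taken from the standard product t-norm family, which satisfies the fuzzy-logic axioms the abstract refers to (e.g., De Morgan's laws).

```python
import numpy as np

# Hypothetical sketch: fuzzy sets over a universe of entities as membership
# vectors in [0, 1]; logical operators from the product t-norm family.
def f_and(a, b):
    # Conjunction: product t-norm
    return a * b

def f_or(a, b):
    # Disjunction: probabilistic sum (the dual t-conorm of the product)
    return a + b - a * b

def f_not(a):
    # Negation: standard complement
    return 1.0 - a

# Two fuzzy sets over a toy universe of 4 entities
p = np.array([0.9, 0.2, 0.0, 1.0])
q = np.array([0.5, 0.8, 0.3, 0.0])

# Answer a conjunctive query "p AND q" by dense vector computation,
# ranking entities by their membership in the result set.
conj = f_and(p, q)
ranking = np.argsort(-conj)
```

Because these operators are plain dense arithmetic, they compose into larger query trees and remain differentiable, which is what makes training them with self-supervised knowledge completion objectives feasible.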
14:15-14:30 MLG Oral Presentation: mvn2vec: Preservation and Collaboration in Multi-View Network Embedding (Yu Shi, Fangqiu Han, Xinwei He, Xinran He, Carl Yang, Luo Jie and Jiawei Han)
14:30-14:45 DLG Oral Presentation: Robust Synthetic GNN Benchmarks with GraphWorld (John Palowitch, Anton Tsitsulin, Brandon Mayer and Bryan Perozzi)
14:45-15:30 Break and Poster Session
15:30-16:15 MLG Keynote Talk 2: Social reinforcement learning for optimizing network-level goals in multi-agent systems
Abstract: Recommendation systems are traditionally optimized for individual engagement signals such as clicks, reactions, and reads. However, enterprise and consumer communication software also helps shape the larger community structure by increasing the spread of diverse knowledge and aiding the formation and maintenance of ties across groups. Despite this observation, current enterprise recommendation models cannot optimize for organizational goals beyond individual engagement. In this talk, I will describe a framework for social reinforcement learning that can be used to optimize network-level rewards in these multi-agent systems. We first consider the application of mitigating the impact of fake news in social networks by incentivizing the spread of true news. Then I will discuss how we are applying these ideas to optimize corporate news feeds for team productivity and organizational health.
16:15-16:45 Final Remarks