Awesome-Graph-Research-ICLR2024

Description: A comprehensive resource hub compiling all graph papers accepted at the International Conference on Learning Representations (ICLR) in 2024.

File list:
fig/
LICENSE

# Awesome ICLR 2024 Graph Paper Collection

This repository curates a comprehensive compilation of graph papers presented at the International Conference on Learning Representations (ICLR) 2024. ICLR is one of the foremost venues for cutting-edge advances in machine learning and artificial intelligence, drawing top-tier researchers, practitioners, and experts from across the globe. Graph (or geometric) machine learning plays an indispensable role in machine learning research, providing invaluable insights, methodologies, and solutions to a diverse array of challenges. Whether they introduce pioneering architectures, optimization techniques, theoretical analyses, or empirical investigations, these papers make substantial contributions to the advancement of the field. In this repository, you'll find a rich assortment of graph papers categorised into different subtopics.

## All Papers:

- [Heterophily](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#heterophily)
  - [Locality-Aware Graph Rewiring in GNNs](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#locality-aware-graph-rewiring-in-gnns)
  - [Probabilistically Rewired Message-Passing Neural Networks](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#probabilistically-rewired-message-passing-neural-networks)
- [Graph Transformer](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#graph-transformer)
  - [Training Graph Transformers via Curriculum-Enhanced Attention Distillation](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#training-graph-transformers-via-curriculum-enhanced-attention-distillation)
  - [Transformers vs. Message Passing GNNs: Distinguished in Uniform](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#transformers-vs-message-passing-gnns-distinguished-in-uniform)
  - [Polynormer: Polynomial-Expressive Graph Transformer in Linear Time](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#polynormer-polynomial-expressive-graph-transformer-in-linear-time)
- [Spectral/Polynomial GNN](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#spectralpolynomial-gnn)
  - [Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#learning-adaptive-multiresolution-transforms-via-meta-framelet-based-graph-convolutional-network)
  - [PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#polygcl-graph-contrastive-learning-via-learnable-spectral-polynomial-filters)
- [Shape-aware Graph Spectral Learning](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#shape-aware-graph-spectral-learning)
  - [HoloNets: Spectral Convolutions do extend to Directed Graphs](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#holonets-spectral-convolutions-do-extend-to-directed-graphs)
- [Text-attributed Graph](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#text-attributed-graph)
  - [Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text-Attributed Graph Representation Learning](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#harnessing-explanations-llm-to-lm-interpreter-for-enhanced-text-attributed-graph-representation-learning)
- [Equivariant GNNs](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#equivariant-gnns)
  - [Orbit-Equivariant Graph Neural Networks](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#orbit-equivariant-graph-neural-networks)
  - [Rethinking the Benefits of Steerable Features in 3D Equivariant Graph Neural Networks](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#rethinking-the-benefits-of-steerable-features-in-3d-equivariant-graph-neural-networks)
  - [Clifford Group Equivariant Simplicial Message Passing Networks](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#clifford-group-equivariant-simplicial-message-passing-networks)
  - [Graph Neural Networks for Learning Equivariant Representations of Neural Networks](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#graph-neural-networks-for-learning-equivariant-representations-of-neural-networks)
- [Theory, Weisfeiler & Leman go](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#theory-weisfeiler--leman-go)
  - [G^2N^2: Weisfeiler and Lehman go grammatical](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#g2n2--weisfeiler-and-lehman-go-grammatical)
  - [Beyond Weisfeiler-Lehman: A Quantitative Framework for GNN Expressiveness](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#beyond-weisfeiler-lehman-a-quantitative-framework-for-gnn-expressiveness)
- [GDiffusion-based generation](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#gdiffusion-based-generation)
  - [Graph Generation with $K^2$-trees](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#graph-generation-with--k2-trees)
- [Contrastive Learning](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#contrastive-learning)
  - [PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024#polygcl-graph-contrastive-learning-via-learnable-spectral-polynomial-filters)
- [Proteins](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#contrastive-learning)
  - [Rigid Protein-Protein Docking via Equivariant Elliptic-Paraboloid Interface Prediction](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#rigid-protein-protein-docking-via-equivariant-elliptic-paraboloid-interface-prediction)
- [Proteins, Crystals and Material Generation](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#proteinscrystals-and-material-generation)
  - [Space Group Constrained Crystal Generation](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#space-group-constrained-crystal-generation)
  - [Scalable Diffusion for Materials Generation](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#scalable-diffusion-for-materials-generation)
  - [MOFDiff: Coarse-grained Diffusion for Metal-Organic Framework Design](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#mofdiff-coarse-grained-diffusion-for-metal-organic-framework-design)
- [Causality](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#causality)
  - [Causality-Inspired Spatial-Temporal Explanations for Dynamic Graph Neural Networks](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#causality-inspired-spatial-temporal-explanations-for-dynamic-graph-neural-networks)
- [Anomaly Detection](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#anomaly-detection)
  - [Rayleigh Quotient Graph Neural Networks for Graph-level Anomaly Detection](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#rayleigh-quotient-graph-neural-networks-for-graph-level-anomaly-detection)
  - [Boosting Graph Anomaly Detection with Adaptive Message Passing](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#boosting-graph-anomaly-detection-with-adaptive-message-passing)
- [LLM](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#llm)
  - [Talk like a Graph: Encoding Graphs for Large Language Models](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#talk-like-a-graph-encoding-graphs-for-large-language-models)
  - [Label-free Node Classification on Graphs with Large Language Models (LLMs)](https://github.com/azminewasi/Awesome-Graph-Research-ICLR2024?tab=readme-ov-file#label-free-node-classification-on-graphs-with-large-language-models-llms)

---

## Heterophily

### Locality-Aware Graph Rewiring in GNNs

- **Abstract**: Graph Neural Networks (GNNs) are popular models for machine learning on graphs that typically follow the message-passing paradigm, whereby the feature of a node is updated recursively upon aggregating information over its neighbors. While exchanging messages over the input graph endows GNNs with a strong inductive bias, it can also make GNNs susceptible to *over-squashing*, thereby preventing them from capturing long-range interactions in the given graph. To rectify this issue, graph rewiring techniques have been proposed as a means of improving information flow by altering the graph connectivity. In this work, we identify three desiderata for graph rewiring: **(i) reduce over-squashing, (ii) respect the locality of the graph, and (iii) preserve the sparsity of the graph**. We highlight fundamental trade-offs between spatial and spectral rewiring techniques; while the former often satisfy (i) and (ii) but not (iii), the latter generally satisfy (i) and (iii) at the expense of (ii). We propose a novel rewiring framework that satisfies all of (i)--(iii) through a locality-aware sequence of rewiring operations. We then discuss a specific instance of this rewiring framework and validate its effectiveness on several real-world benchmarks, showing that it either matches or significantly outperforms existing rewiring approaches.
- OpenReview: https://openreview.net/pdf?id=4Ua4hKiAJX

![](fig/4Ua4hKiAJX.jpg)

---

### Probabilistically Rewired Message-Passing Neural Networks

- **Abstract**: Message-passing graph neural networks (MPNNs) emerged as powerful tools for processing graph-structured input. However, they operate on a fixed input graph structure, ignoring potential noise and missing information. Furthermore, their local aggregation mechanism can lead to problems such as over-squashing and limited expressive power in capturing relevant graph structures. Existing solutions to these challenges have primarily relied on heuristic methods, often disregarding the underlying data distribution. Hence, devising principled approaches for learning to infer graph structures relevant to the given prediction task remains an open challenge. In this work, leveraging recent progress in exact and differentiable k-subset sampling, we devise probabilistically rewired MPNNs (PR-MPNNs), which learn to add relevant edges while omitting less beneficial ones. For the first time, our theoretical analysis explores how PR-MPNNs enhance expressive power, and we identify precise conditions under which they outperform purely randomized approaches. Empirically, we demonstrate that our approach effectively mitigates issues like over-squashing and under-reaching. In addition, on established real-world datasets, our method exhibits competitive or superior predictive performance compared to traditional MPNN models and recent graph transformer architectures.
- OpenReview: https://openreview.net/pdf?id=Tj6Wcx7gVk

![](fig/Tj6Wcx7gVk.jpg)
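Both papers in this section treat rewiring as editing the input edge set so that long-range information has fewer hops to travel. As a purely illustrative aid (and not the algorithm of either paper), the sketch below shows the simplest form of the idea: add a small, bounded number of shortcut edges between the most distant node pairs, with the bound keeping the graph sparse. The function name and thresholds are hypothetical.

```python
# Minimal, hypothetical sketch of graph rewiring as a preprocessing step:
# bridge the largest shortest-path distances with a bounded number of
# shortcut edges, so the rewired graph stays sparse.
import networkx as nx

def rewire_with_shortcuts(g: nx.Graph, max_new_edges: int = 4, min_distance: int = 3) -> nx.Graph:
    """Return a copy of `g` with up to `max_new_edges` shortcut edges added
    between node pairs whose shortest-path distance is at least `min_distance`."""
    rewired = g.copy()
    lengths = dict(nx.all_pairs_shortest_path_length(g))
    # Rank candidate pairs by distance so the worst bottlenecks are bridged first.
    candidates = sorted(
        ((u, v, d) for u, dists in lengths.items() for v, d in dists.items()
         if u < v and d >= min_distance),
        key=lambda t: -t[2],
    )
    for u, v, _ in candidates[:max_new_edges]:
        rewired.add_edge(u, v)
    return rewired

# Example: a path graph is one long bottleneck; a few shortcuts shrink its diameter.
g = nx.path_graph(10)
print(nx.diameter(g), nx.diameter(rewire_with_shortcuts(g)))
```

Learned approaches such as PR-MPNNs replace this fixed heuristic with a differentiable, data-driven choice of which edges to add or drop.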
---

## Graph Transformer

### Training Graph Transformers via Curriculum-Enhanced Attention Distillation

- **Abstract**: Recent studies have shown that Graph Transformers (GTs) can be effective for specific graph-level tasks. However, when it comes to node classification, training GTs remains challenging, especially in semi-supervised settings with a severe scarcity of labeled data. Our paper aims to address this research gap by focusing on semi-supervised node classification. To accomplish this, we develop a curriculum-enhanced attention distillation method that utilizes a Local GT teacher and a Global GT student. Additionally, we introduce the concepts of in-class and out-of-class and then propose two improvements, out-of-class entropy and top-k pruning, to facilitate the student's out-of-class exploration under the teacher's in-class guidance. Taking inspiration from human learning, our method involves a curriculum mechanism for distillation that initially provides strict guidance to the student and gradually allows more out-of-class exploration through a dynamic balance. Extensive experiments show that our method outperforms many state-of-the-art approaches on seven public graph benchmarks, proving its effectiveness.
- OpenReview: https://openreview.net/pdf?id=j4VMrwgn1M

![](fig/j4VMrwgn1M.jpg)

---

### Transformers vs. Message Passing GNNs: Distinguished in Uniform

- **TLDR:** Graph Transformers and MPGNNs are incomparable in terms of uniform function approximation, while neither is "universal" in this setting.
- **Abstract**: Graph Transformers (GTs) such as SAN and GPS have been shown to be universal function approximators. We show that when extending MPGNNs and even 2-layer MLPs with the same positional encodings that GTs use, they also become universal function approximators on graphs. All these results hold in the non-uniform case where a different network may be used for every graph size. In order to show meaningful differences between GTs and MPGNNs, we then consider the uniform setting where a single network needs to work for all graph sizes. First, we show that none of the above models is universal in that setting. Then, our main technical result is that there are functions that GTs can express while MPGNNs with virtual nodes cannot and vice versa, making their uniform expressivity provably different. We show this difference empirically on synthetic data and observe that on real-world data, global information exchange through graph transformers and conceptually simpler MPGNNs with virtual nodes achieve similar performance gains over message passing on various datasets.
- OpenReview: https://openreview.net/pdf?id=AcSChDWL6V

![](fig/AcSChDWL6V.jpg)
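The comparison above repeatedly refers to "MPGNNs with virtual nodes" as the message-passing counterpart capable of global information exchange. As a rough, hypothetical illustration of that mechanism only (not the construction analyzed in the paper), the sketch below adds one extra node connected to every real node, so each layer mixes local neighbor aggregation with a global read/write through the virtual node.

```python
# Minimal sketch (plain PyTorch, no PyG) of a message-passing layer augmented
# with a virtual node. Shapes and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class VirtualNodeMPLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.node_update = nn.Linear(2 * dim, dim)   # [self feature | neighbor mean]
        self.vnode_update = nn.Linear(2 * dim, dim)  # [virtual node | mean of all nodes]

    def forward(self, x: torch.Tensor, adj: torch.Tensor, vnode: torch.Tensor):
        # x: (N, d) node features, adj: (N, N) dense adjacency, vnode: (d,) global state
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh_mean = adj @ x / deg                                        # local aggregation
        x = torch.relu(self.node_update(torch.cat([x, neigh_mean], dim=-1)))
        x = x + vnode                                                     # broadcast global state to every node
        vnode = torch.relu(self.vnode_update(torch.cat([vnode, x.mean(dim=0)], dim=-1)))
        return x, vnode

# Toy usage on a random 5-node graph.
x, adj, v = torch.randn(5, 16), (torch.rand(5, 5) > 0.5).float(), torch.zeros(16)
x, v = VirtualNodeMPLayer(16)(x, adj, v)
```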
---

### Polynormer: Polynomial-Expressive Graph Transformer in Linear Time

- **TLDR:** A linear graph transformer that performs well on homo/heterophilic graphs by learning high-degree equivariant polynomials.
- **Abstract**: Graph transformers (GTs) have emerged as a promising architecture that is theoretically more expressive than message-passing graph neural networks (GNNs). However, typical GT models have at least quadratic complexity and thus cannot scale to large graphs. While several linear GTs have recently been proposed, they still lag behind GNN counterparts on several popular graph datasets, which poses a critical concern about their practical expressivity. To balance the trade-off between expressivity and scalability of GTs, we propose Polynormer, a polynomial-expressive GT model with linear complexity. Polynormer is built upon a novel base model that learns a high-degree polynomial on input features. To make the base model permutation equivariant, we integrate it with graph topology and node features separately, resulting in local and global equivariant attention models. Consequently, Polynormer adopts a linear local-to-global attention scheme to learn high-degree equivariant polynomials whose coefficients are controlled by attention scores. Polynormer has been evaluated on $13$ homophilic and heterophilic datasets, including large graphs with millions of nodes. Our extensive experiment results show that Polynormer outperforms state-of-the-art GNN and GT baselines on most datasets, even without the use of nonlinear activation functions.
- OpenReview: https://openreview.net/pdf?id=hmv1LpNfXa

![](fig/hmv1LpNfXa.jpg)

---

## Spectral/Polynomial GNN

### Learning Adaptive Multiresolution Transforms via Meta-Framelet-based Graph Convolutional Network

- **TLDR:** We propose MM-FGCN, a novel framework designed to learn adaptive graph multiresolution transforms, achieving state-of-the-art performance on various graph representation learning tasks.
- **Abstract**: Graph Neural Networks are popular tools in graph representation learning that capture the graph structural properties. However, most GNNs employ single-resolution graph feature extraction, thereby failing to capture micro-level local patterns (high resolution) and macro-level graph cluster and community patterns (low resolution) simultaneously. Many multiresolution methods have been developed to capture graph patterns at multiple scales, but most of them depend on predefined and handcrafted multiresolution transforms that remain fixed throughout the training process once formulated. Due to variations in graph instances and distributions, fixed handcrafted transforms cannot effectively tailor multiresolution representations to each graph instance. To acquire multiresolution representations suited to different graph instances and distributions, we introduce the Multiresolution Meta-Framelet-based Graph Convolutional Network (MM-FGCN), facilitating comprehensive and adaptive multiresolution analysis across diverse graphs. Extensive experiments demonstrate that our MM-FGCN achieves SOTA performance on various graph learning tasks.
- OpenReview: https://openreview.net/pdf?id=5RielfrDkP

![](fig/5RielfrDkP.jpg)
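The papers in this section build on a common primitive: filtering node features with a learnable polynomial of a graph operator, so that the filter's frequency response (low-pass, high-pass, or anything in between) can adapt to the data. The sketch below is a generic, monomial-basis toy version of that primitive, y = Σ_k θ_k Â^k x with Â the symmetrically normalized adjacency; it is an assumption-laden illustration, not the MM-FGCN framelet construction or PolyGCL's parameterization.

```python
# Minimal sketch of a learnable polynomial spectral filter: node features are
# filtered by a learned polynomial of the normalized adjacency matrix.
import torch
import torch.nn as nn

class PolynomialSpectralFilter(nn.Module):
    def __init__(self, order: int = 3):
        super().__init__()
        self.theta = nn.Parameter(torch.zeros(order + 1))
        with torch.no_grad():
            self.theta[0] = 1.0  # start close to the identity filter

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, d) features, adj: (N, N) dense adjacency without self-loops
        a = adj + torch.eye(adj.shape[0])                      # add self-loops
        d_inv_sqrt = a.sum(dim=1).clamp(min=1.0).pow(-0.5)
        a_hat = d_inv_sqrt[:, None] * a * d_inv_sqrt[None, :]  # D^-1/2 (A + I) D^-1/2
        out, power = self.theta[0] * x, x
        for k in range(1, self.theta.numel()):
            power = a_hat @ power                              # A_hat^k x, computed iteratively
            out = out + self.theta[k] * power
        return out

# Toy usage: filter random features on a random undirected 6-node graph.
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.T) > 0).float().fill_diagonal_(0)
y = PolynomialSpectralFilter(order=3)(torch.randn(6, 8), adj)
```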
---

### PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters

- **TLDR:** We introduce spectral polynomial filters into graph contrastive learning to model heterophilic graphs.
- **Abstract**: Recently, Graph Contrastive Learning (GCL) has achieved significantly superior performance in self-supervised graph representation learning. However, existing GCL techniques have inherently smooth characteristics because of their low-pass GNN encoders and objectives based on the homophily assumption, which poses a challenge when applying them to heterophilic graphs. In supervised learning tasks, spectral GNNs with polynomial approximation excel in both homophilic and heterophilic settings by adaptively fitting graph filters of arbitrary shapes. Yet, their applications in unsupervised learning are rarely explored. Based on the above analysis, a natural question arises: *Can we incorporate the excellent properties of spectral polynomial filters into graph contrastive learning?* In this paper, we address the question by studying the necessity of introducing high-pass information for heterophily from a spectral perspective. We propose PolyGCL, a GCL pipeline that utilizes polynomial filters to achieve contrastive learning between the low-pass and high-pass views. Specifically, PolyGCL utilizes polynomials with learnable filter functions to generate different spectral views and an objective that incorporates high-pass information through a linear combination. We theoretically prove that PolyGCL outperforms previous GCL paradigms when applied to graphs with varying levels of homophily. We conduct extensive experiments on both synthetic and real-world datasets, which demonstrate the promising performance of PolyGCL on homophilic and heterophilic graphs.
- OpenReview: https://openreview.net/pdf?id=y21ZO6M86t

![](fig/y21ZO6M86t.jpg)

---

## Shape-aware Graph Spectral Learning

### HoloNets: Spectral Convolutions do extend to Directed Graphs

- **TLDR:** We extend spectral convolutions to directed graphs. Corresponding networks are shown to set SOTA on heterophilic node classification tasks and to be stable to topological perturbations.
- **Abstract**: Within the graph learning community, conventional wisdom dictates that spectral convolutional networks may only be deployed on undirected graphs: only there could the existence of a well-defined graph Fourier transform be guaranteed, so that information may be translated between spatial and spectral domains. Here we show this traditional reliance on the graph Fourier transform to be superfluous: Making use of certain advanced t ... ...
