Daily arXiv Papers

Graph Neural Networks · Graph Learning · LLM × Graph

Showing 23 papers for 2026-04-13

GNN-as-Judge: Unleashing the Power of LLMs for Graph Learning with GNN Feedback
GNN Graph Learning LLM × Graph

We propose GNN-as-Judge, a framework that uses GNN feedback to judge and refine LLM-generated pseudo-labels for graph learning in low-label regimes. By selecting reliable pseudo-labels with GNN guidance, the approach aims to improve LLM-based predictions on text-attributed graphs (TAGs) where labeled data are scarce. Empirical results show gains over baselines on TAG tasks, validating the combination of LLMs with graph feedback.
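
As a rough sketch of the selection idea (not the paper's actual criterion; the agreement rule and threshold below are illustrative), keeping an LLM pseudo-label only when an auxiliary GNN agrees and is confident might look like:

```python
def select_pseudo_labels(llm_probs, gnn_probs, tau=0.7):
    """Keep an LLM pseudo-label only where a GNN trained on the same
    graph agrees on the top class and is confident (illustrative rule).

    llm_probs, gnn_probs: per-node class-probability lists.
    Returns (node, label) pairs for the retained pseudo-labels.
    """
    kept = []
    for node, (lp, gp) in enumerate(zip(llm_probs, gnn_probs)):
        llm_label = max(range(len(lp)), key=lp.__getitem__)
        gnn_label = max(range(len(gp)), key=gp.__getitem__)
        if llm_label == gnn_label and gp[gnn_label] >= tau:
            kept.append((node, llm_label))
    return kept

# toy example: 3 nodes, 2 classes
llm = [[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]]
gnn = [[0.8, 0.2], [0.9, 0.1], [0.55, 0.45]]
kept = select_pseudo_labels(llm, gnn)
# node 1 is dropped (disagreement), node 2 is dropped (low confidence)
```

Only node 0 survives both filters; the surviving pairs would then serve as training labels for the downstream model.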

A Closer Look at the Application of Causal Inference in Graph Representation Learning
Graph Learning

This paper analyzes the use of causal inference in graph representation learning and demonstrates that aggregating diverse graph elements into a single causal variable can violate causal assumptions. The authors prove that such aggregation harms causal validity and argue for more principled modeling of causal structure in graphs, outlining implications for future methods.

Neighbourhood Transformer: Switchable Attention for Monophily-Aware Graph Learning
GNN Graph Learning

We introduce the Neighbourhood Transformer (NT), which uses switchable attention to account for monophily, addressing the limitations of homophily-focused GNNs. By adapting its attention patterns to varied local structures, NT improves performance on graphs exhibiting heterophily and monophily, enabling more robust message passing.

Beyond Isolated Clients: Integrating Graph-Based Embeddings into Event Sequence Models
Graph Learning

Beyond isolated clients, this work embeds global graph structure into event-sequence models via three strategies for contrastive self-supervised learning: (1) enriching event embeddings with graph context; (2) aligning client representations to a graph-informed global anchor; and (3) incorporating graph-aware augmentation/objectives. The goal is to improve downstream attribute prediction tasks.
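
A toy version of strategy (1), with names and dimensions that are purely illustrative (not from the paper): append the mean embedding of a client's graph neighbors to each event embedding, so the sequence model sees cross-client context.

```python
def enrich(event_emb, client, client_emb, neighbors):
    """Concatenate an event embedding with the mean embedding of the
    client's graph neighbors (illustrative graph-context enrichment)."""
    nbrs = neighbors.get(client, [])
    dim = len(event_emb)
    if nbrs:
        ctx = [sum(client_emb[n][d] for n in nbrs) / len(nbrs)
               for d in range(dim)]
    else:
        ctx = [0.0] * dim  # isolated client: no graph context available
    return event_emb + ctx  # concatenation doubles the dimension

client_emb = {"a": [1.0, 0.0], "b": [0.0, 1.0], "c": [1.0, 1.0]}
neighbors = {"a": ["b", "c"]}
enriched = enrich([0.5, 0.5], "a", client_emb, neighbors)
# → [0.5, 0.5, 0.5, 1.0]: original event plus neighbor-mean context
```

The enriched vectors would then feed the event-sequence encoder in place of the raw event embeddings.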

EquiformerV3: Scaling Efficient, Expressive, and General SE(3)-Equivariant Graph Attention Transformers
GNN Graph Learning

EquiformerV3 advances SE(3)-equivariant graph attention transformers by targeting efficiency, expressivity, and generality. Building on EquiformerV2, it presents software optimizations and architectural improvements to scale 3D atomistic modeling, delivering faster, more accurate, and more generalizable results.

On the Role of DAG Topology in Energy-Aware Cloud Scheduling: A GNN-Based Deep Reinforcement Learning Approach
GNN Graph Learning

We study DAG topology for energy-aware cloud scheduling with a GNN-based deep reinforcement learning (DRL) scheduler. The approach minimizes workflow completion time and energy usage, and the work also identifies specific out-of-distribution (OOD) conditions where GNN-based DRL schedulers fail, offering principled explanations and potential remedies.

DiffHLS: Differential Learning for High-Level Synthesis QoR Prediction with GNNs and LLM Code Embeddings
GNN Graph Learning LLM × Graph

DiffHLS presents a differential learning framework for high-level synthesis QoR prediction. It learns from kernel-baseline and pragma-inserted design variants, using dedicated GNN branches for kernel and design graphs and a delta pathway augmented with code embeddings from LLMs. This yields more accurate QoR predictions with limited data.

R2G: A Multi-View Circuit Graph Benchmark Suite from RTL to GDSII
GNN Graph Learning

R2G introduces a multi-view circuit graph benchmark suite from RTL to GDSII, standardizing five stage-aware views for 30 IP cores with up to one million nodes/edges. It addresses inconsistent circuit representations and provides controlled evaluation protocols to advance GNN-based physical design research.

Hypergraph Neural Networks Accelerate MUS Enumeration
GNN Graph Learning

Hypergraph Neural Networks (HGNNs) are proposed to accelerate Minimal Unsatisfiable Subset (MUS) enumeration. The method is domain-agnostic and leverages higher-order relations captured by hypergraphs to prune the search space, reducing computation time across constraint satisfaction problems (CSPs) even when satisfiability checks are expensive.
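
The hypergraph view of a constraint system is standard: one hyperedge per constraint, spanning every variable it touches. A minimal construction (the pruning model itself is not sketched here):

```python
def constraints_to_hypergraph(constraints):
    """Represent a constraint system as a hypergraph: each constraint
    becomes a hyperedge over its variable scope. Shared variables make
    higher-order overlap between constraints explicit, which is the
    structure a hypergraph GNN can exploit when ranking subsets."""
    edges = [frozenset(scope) for scope, _ in constraints]
    incident = {}
    for i, e in enumerate(edges):
        for v in e:
            incident.setdefault(v, set()).add(i)
    return edges, incident

# tiny CSP: c0: x != y, c1: y != z, c2: x + y + z == 0
csp = [(("x", "y"), "x != y"),
       (("y", "z"), "y != z"),
       (("x", "y", "z"), "x + y + z == 0")]
edges, incident = constraints_to_hypergraph(csp)
# "y" participates in all three constraints: incident["y"] == {0, 1, 2}
```

A GNN scoring function over this structure can then prioritize which constraint subsets to test for unsatisfiability.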

Transferable FB-GNN-MBE Framework for Potential Energy Surfaces: Data-Adaptive Transfer Learning in Deep Learned Many-Body Expansion Theory
GNN Graph Learning

FB-GNN-MBE integrates a fragment-based GNN into the many-body expansion (MBE) framework to predict potential energy surfaces (PES) for large, chemically hierarchical systems. It enables transferable, data-adaptive learning that reproduces first-principles PES with improved scalability.
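
The many-body expansion itself is standard: total energy as a sum of fragment energies plus interaction corrections, E ≈ Σᵢ E(i) + Σᵢ<ⱼ [E(i,j) − E(i) − E(j)] + … A two-body-truncated sketch, where `energy` stands in for the learned fragment model (the toy callable below is not from the paper):

```python
from itertools import combinations

def mbe2(fragments, energy):
    """Two-body-truncated many-body expansion:
    E ≈ sum_i E(i) + sum_{i<j} [E(i,j) - E(i) - E(j)]."""
    e1 = {f: energy((f,)) for f in fragments}
    total = sum(e1.values())
    for i, j in combinations(fragments, 2):
        total += energy((i, j)) - e1[i] - e1[j]
    return total

def toy_energy(frags):
    """Toy energy model: per-fragment values plus pairwise couplings,
    so the two-body truncation is exact for it."""
    vals = {"A": -1.0, "B": -2.0, "C": -3.0}
    e = sum(vals[f] for f in frags)
    for i, j in combinations(frags, 2):
        e += 0.1 * vals[i] * vals[j]
    return e

approx = mbe2(("A", "B", "C"), toy_energy)
exact = toy_energy(("A", "B", "C"))   # both evaluate to -4.9
```

Because the toy model has only pairwise couplings, the truncated expansion recovers the full-system energy exactly; for real systems the GNN supplies the fragment energies and truncation introduces a controlled error.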

Inferring Latent Temporal Sparse Coordination Graph for Multi-Agent Reinforcement Learning
Graph Learning

This work proposes learning latent, temporally sparse coordination graphs to represent agent interactions in multi-agent reinforcement learning. Sparsifying the inferred graph reduces computation while preserving the important coordination signals, improving multi-agent cooperation.

FIT-GNN: Faster Inference Time for GNNs that 'FIT' in Memory Using Coarsening
GNN Graph Learning

FIT-GNN tackles inference-time scalability by using graph coarsening to reduce computation. It introduces two methods, Extra Nodes and Coarsening, to accelerate GNN inference with controlled accuracy loss, enabling faster deployment.
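
As a generic illustration of coarsening (not the paper's Extra Nodes or Coarsening algorithms, and the cluster assignment is assumed given): collapse node groups into super-nodes, average their features, and connect super-nodes whenever any members were adjacent.

```python
from collections import defaultdict

def coarsen(adj, features, assign):
    """Collapse nodes into super-nodes: average member features, and
    connect two super-nodes iff any of their members are adjacent.
    Inference then runs on the much smaller coarsened graph."""
    groups = defaultdict(list)
    for node, cluster in enumerate(assign):
        groups[cluster].append(node)
    coarse_feat = {
        c: [sum(features[n][d] for n in ns) / len(ns)
            for d in range(len(features[0]))]
        for c, ns in groups.items()
    }
    coarse_adj = defaultdict(set)
    for u, nbrs in enumerate(adj):
        for v in nbrs:
            cu, cv = assign[u], assign[v]
            if cu != cv:
                coarse_adj[cu].add(cv)
                coarse_adj[cv].add(u if False else cu)  # undirected
    return dict(coarse_adj), coarse_feat

# 4-node path 0-1-2-3 with 1-d features; merge {0,1} and {2,3}
adj = [[1], [0, 2], [1, 3], [2]]
feats = [[1.0], [3.0], [5.0], [7.0]]
cadj, cfeat = coarsen(adj, feats, assign=[0, 0, 1, 1])
# cfeat == {0: [2.0], 1: [6.0]}; cadj == {0: {1}, 1: {0}}
```

The four-node path shrinks to a two-node edge, halving the graph a GNN must process at inference time.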

Graph Defense Diffusion Model
GNN Graph Learning

The Graph Defense Diffusion Model defends GNNs against multiple types of adversarial attack by learning a diffusion-based purification process that recovers clean graph structure from perturbed inputs, improving robustness and resilience.

Mamba-Based Graph Convolutional Networks: Tackling Over-smoothing with Selective State Space
GNN Graph Learning

Mamba-Based Graph Convolutional Networks (MbaGCN) tackle over-smoothing by introducing selective state-space messaging inspired by the Mamba paradigm. The approach distinguishes the importance of information from different neighborhoods, enabling deeper, more expressive GNNs.
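
Over-smoothing can be seen in a few lines. The sketch below is not the Mamba mechanism; it uses a simple fixed gate that mixes the original input back in at every layer as a crude stand-in for selective retention of node-specific information.

```python
def propagate(x0, adj, layers, alpha=0.0):
    """Repeated neighborhood averaging on scalar node features.
    alpha=0.0 is plain averaging (over-smooths); alpha>0 blends the
    original features back in each layer, preserving node identity."""
    h = list(x0)
    for _ in range(layers):
        h = [alpha * x0[u] + (1 - alpha) *
             sum(h[v] for v in adj[u]) / len(adj[u])
             for u in range(len(h))]
    return h

adj = [[1], [0, 2], [1, 3], [2]]   # 4-node path graph
x0 = [0.0, 1.0, 2.0, 3.0]
spread = lambda h: max(h) - min(h)  # how distinguishable nodes remain

plain = propagate(x0, adj, layers=20)             # spread shrinks to ~0.33
gated = propagate(x0, adj, layers=20, alpha=0.5)  # spread stays ~1.89
```

After 20 layers of plain averaging the original spread of 3.0 has collapsed; the gated variant keeps nodes far more distinguishable, which is the effect selective state-space updates aim to achieve with learned, input-dependent gates.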

Dual Mamba for Node-Specific Representation Learning: Tackling Over-Smoothing with Selective State Space Modeling
GNN Graph Learning

Dual Mamba extends selective state-space modeling to node-specific representation learning, capturing each node's progressive evolution across layers and incorporating global information to further mitigate over-smoothing.

Bandwidth-constrained Variational Message Encoding for Cooperative Multi-agent Reinforcement Learning
Graph Learning

Bandwidth-constrained Variational Message Encoding studies what information to transmit under hard bandwidth limits in cooperative MARL. It shows naive dimensionality reduction degrades coordination and proposes a variational encoding scheme to selectively preserve informative messages.
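
To see why naive truncation hurts, consider a crude informativeness proxy (this is not the paper's variational objective, which learns the encoding end-to-end): rank message dimensions by their variance across agents and transmit only the top k, instead of blindly keeping the first k.

```python
def top_variance_dims(messages, k):
    """Pick the k message dimensions with highest variance across
    agents; a constant dimension carries no coordination signal."""
    n, d = len(messages), len(messages[0])
    means = [sum(m[j] for m in messages) / n for j in range(d)]
    var = [sum((m[j] - means[j]) ** 2 for m in messages) / n
           for j in range(d)]
    return sorted(range(d), key=lambda j: -var[j])[:k]

# dim 0 is constant (uninformative); dims 1 and 2 vary across agents
msgs = [[1.0, 0.2, 5.0],
        [1.0, 0.9, 1.0],
        [1.0, 0.5, 3.0]]
keep = top_variance_dims(msgs, k=2)               # selects dims 2 and 1
compressed = [[m[j] for j in keep] for m in msgs]  # 2 floats per message
```

Naive truncation to the first two dimensions would waste half the budget on the constant dimension 0; a learned variational encoder generalizes this idea by optimizing what to keep rather than using a fixed heuristic.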

BLEG: LLM Functions as Powerful fMRI Graph-Enhancer for Brain Network Analysis
Graph Learning LLM × Graph

BLEG proposes using LLMs as brain-network enhancers for fMRI-based GNN analyses. By injecting LLM-derived representations or priors, it mitigates feature sparsity and domain knowledge gaps, boosting performance on brain-network tasks.

Bayesian Social Deduction with Graph-Informed Language Models
LLM × Graph Graph Learning

This work introduces a hybrid framework that externalizes an LLM agent's beliefs into a graph-based Bayesian model, combining LLM reasoning with probabilistic graph inference to improve social deduction in real time.
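
The externalized belief is, at its core, a posterior the LLM does not have to track in-context. A minimal sketch, with a flat distribution over hypotheses standing in for the paper's graph-structured model (the scenario and likelihoods below are invented for illustration):

```python
def update_belief(prior, likelihood):
    """One Bayes step over hypotheses: posterior proportional to
    prior * likelihood, then normalized."""
    post = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# uniform prior over 'which player is the impostor'; player B's
# statement contradicts an observed event, so the observation is much
# more likely if B is the impostor
prior = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}
lik = {"A": 0.2, "B": 0.9, "C": 0.2}  # P(observation | hypothesis)
posterior = update_belief(prior, lik)  # mass shifts onto B
```

The LLM's job reduces to mapping dialogue into likelihoods, while the Bayesian model carries the belief state consistently across turns.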

Bayesian Ego-graph Inference for Networked Multi-Agent Reinforcement Learning
Graph Learning

Bayesian Ego-graph Inference for Networked MARL develops a stochastic ego-graph policy that adapts to dynamic, local neighborhoods in networked MARL. It enables decentralized learning under changing graphs while maintaining robust coordination.

From Business Events to Auditable Decisions: Ontology-Governed Graph Simulation for Enterprise AI
Graph Learning

This work introduces LOM-action, an event-driven ontology simulation that deterministically mutates enterprise graphs in a sandbox to produce auditable, grounded decisions with traceable rationale.

LLMs Underperform Graph-Based Parsers on Supervised Relation Extraction for Complex Graphs
Graph Learning LLM × Graph

This work shows that large language models underperform graph-based parsers for supervised relation extraction when the underlying linguistic graph is highly complex. The authors compare four LLMs against a graph-based parser on six relation extraction datasets to demonstrate the gap in performance on complex graphs. The study highlights limitations of LLMs for extracting structured relations in graph-rich inputs.

SatQNet: Satellite-assisted Quantum Network Entanglement Routing Using Directed Line Graph Neural Networks
GNN Graph Learning

SatQNet investigates entanglement routing in satellite-assisted quantum networks, where satellite motion and stochastic link generation create a dynamic topology. It employs directed line graph neural networks to learn routing policies that adapt to changing links without relying on global topology awareness. The approach aims to improve long-distance entanglement distribution by leveraging graph-based learning suited to directed line graphs.
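
The directed line graph construction itself is standard: every original edge (link) becomes a node, and edge (u, v) connects to (v, w) whenever the two links chain head-to-tail. A minimal sketch (construction only; the topology is a made-up example, and SatQNet's learned routing model is not shown):

```python
def directed_line_graph(edges):
    """Directed line graph: original edges become nodes, connected when
    they chain head-to-tail. Routing over links then reduces to
    node-level message passing on this graph."""
    return {e: [f for f in edges if e[1] == f[0]] for e in edges}

# small relay topology: ground A -> satellite S, then S -> B and S -> C
edges = [("A", "S"), ("S", "B"), ("S", "C")]
lg = directed_line_graph(edges)
# link A->S chains into both downstream links:
# lg[("A", "S")] == [("S", "B"), ("S", "C")]
```

Working on the line graph lets the policy reason about link states (the quantities that actually fluctuate with satellite motion) rather than node states.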

HyperMem: Hypergraph Memory for Long-Term Conversations
Graph Learning

HyperMem introduces a hypergraph-based hierarchical memory for long-term conversations that captures high-order associations beyond pairwise relations. Unlike retrieval-augmented generation and traditional graph memories, HyperMem explicitly models joint dependencies among multiple elements, yielding memory that stays coherent, persistent, and personalized across extended dialogues.
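
A toy version of the core data structure (HyperMem's real design is hierarchical and far richer; class and method names here are illustrative): each remembered fact is one hyperedge joining all the entities it mentions, so recall by any entity recovers the whole joint association rather than a chain of pairwise links.

```python
from itertools import count

class HypergraphMemory:
    """Minimal hypergraph memory: facts are hyperedges over the
    entities they mention; recall by entity returns whole facts."""

    def __init__(self):
        self.facts = {}        # edge id -> fact text
        self.incident = {}     # entity -> set of edge ids
        self._ids = count()

    def add(self, entities, fact):
        eid = next(self._ids)
        self.facts[eid] = fact
        for e in entities:
            self.incident.setdefault(e, set()).add(eid)
        return eid

    def recall(self, entity):
        return [self.facts[i]
                for i in sorted(self.incident.get(entity, ()))]

mem = HypergraphMemory()
mem.add({"Alice", "Bob", "Paris"}, "Alice and Bob met in Paris")
mem.add({"Alice", "violin"}, "Alice plays violin")
mem.recall("Alice")   # both facts, each kept as one joint unit
```

A pairwise graph memory would need three separate edges for the first fact alone and could lose the fact that all three entities belong to one event; the hyperedge keeps that joint dependency intact.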