Showing 23 papers for 2026-04-13
We propose GNN-as-Judge, a framework that uses GNN feedback to judge and refine LLM-generated pseudo-labels for graph learning in low-label regimes. By selecting reliable pseudo-labels with GNN guidance, the approach aims to improve LLM-based predictions on text-attributed graphs where labeled data are scarce. Empirical results show gains over baselines on TAG tasks, validating the combination of LLMs with graph feedback.
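The core selection step can be sketched as a confidence-gated agreement check. This is an illustrative toy under assumptions, not the paper's implementation: the function name `select_reliable_pseudo_labels` and the fixed threshold are inventions for the example.

```python
import numpy as np

def select_reliable_pseudo_labels(llm_labels, gnn_probs, threshold=0.8):
    """Keep LLM pseudo-labels only where the GNN agrees with high confidence.

    llm_labels: (n,) int array of LLM-proposed class labels per node
    gnn_probs:  (n, c) array of GNN class probabilities per node
    Returns indices of nodes whose pseudo-label is judged reliable.
    """
    gnn_pred = gnn_probs.argmax(axis=1)        # GNN's own prediction
    gnn_conf = gnn_probs.max(axis=1)           # and its confidence
    agree = (gnn_pred == llm_labels)           # agreement with the LLM
    return np.where(agree & (gnn_conf >= threshold))[0]

# toy example: 4 nodes, 3 classes
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.50, 0.10],
                  [0.10, 0.10, 0.80],
                  [0.85, 0.10, 0.05]])
labels = np.array([0, 1, 2, 1])
print(select_reliable_pseudo_labels(labels, probs))  # → [0 2]
```

Node 1 is dropped for low GNN confidence and node 3 for disagreement, leaving only pseudo-labels the GNN "judges" reliable.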
This paper analyzes the use of causal inference in graph representation learning and demonstrates that aggregating diverse graph elements into a single causal variable can violate causal assumptions. The authors prove that such aggregation harms causal validity and argue for more principled modeling of causal structure in graphs, outlining implications for future methods.
We introduce the Neighbourhood Transformer (NT), whose switchable attention makes it monophily-aware, addressing the limitations of homophily-focused GNNs. By adapting its attention patterns to varied local structures, NT improves performance on graphs exhibiting heterophily and monophily, enabling more robust message passing.
Beyond isolated clients, this work embeds global graph structure into event-sequence models via three strategies for contrastive self-supervised learning: (1) enriching event embeddings with graph context; (2) aligning client representations to a graph-informed global anchor; and (3) incorporating graph-aware augmentation/objectives. The goal is to improve downstream attribute prediction tasks.
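Strategy (2), aligning client representations to a graph-informed anchor, is naturally expressed as an InfoNCE-style contrastive loss. The sketch below is a minimal NumPy illustration of that general recipe, not the paper's objective; `info_nce_to_anchor` and the temperature value are assumptions.

```python
import numpy as np

def info_nce_to_anchor(client_embs, anchor_embs, temperature=0.1):
    """InfoNCE loss aligning each client embedding to its graph-informed anchor.

    client_embs, anchor_embs: (n, d) arrays; row i of each is a positive pair,
    and all other anchors serve as in-batch negatives.
    """
    c = client_embs / np.linalg.norm(client_embs, axis=1, keepdims=True)
    a = anchor_embs / np.linalg.norm(anchor_embs, axis=1, keepdims=True)
    logits = c @ a.T / temperature                 # (n, n) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy on the diagonal

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
loss_aligned = info_nce_to_anchor(x, x)                      # matched pairs
loss_shuffled = info_nce_to_anchor(x, np.roll(x, 1, axis=0)) # mismatched pairs
```

Matched client/anchor pairs yield a much smaller loss than shuffled ones, which is exactly the pressure that pulls each client toward its own graph-informed anchor.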
EquiformerV3 advances SE(3)-equivariant graph attention transformers by targeting efficiency, expressivity, and generality. Building on EquiformerV2, it presents software optimizations and architectural improvements to scale 3D atomistic modeling, delivering faster, more accurate, and more generalizable results.
We study DAG topology for energy-aware cloud scheduling with a GNN-based deep reinforcement learning (DRL) scheduler. The approach minimizes workflow completion time and energy usage, and the work also identifies specific out-of-distribution (OOD) conditions where GNN-based DRL schedulers fail, offering principled explanations and potential remedies.
DiffHLS presents a differential learning framework for high-level synthesis QoR prediction. It learns from kernel-baseline and pragma-inserted design variants, using dedicated GNN branches for kernel and design graphs and a delta pathway augmented with code embeddings from LLMs. This yields more accurate QoR predictions with limited data.
R2G introduces a multi-view circuit graph benchmark suite from RTL to GDSII, standardizing five stage-aware views for 30 IP cores with up to one million nodes/edges. It addresses inconsistent circuit representations and provides controlled evaluation protocols to advance GNN-based physical design research.
Hypergraph Neural Networks (HGN) are proposed to accelerate Minimal Unsatisfiable Subset (MUS) enumeration. The method is domain-agnostic and leverages higher-order relations captured by hypergraphs to prune the search space, reducing computation time across CSPs even when satisfiability checks are expensive.
FB-GNN-MBE integrates a fragment-based GNN into the many-body expansion (MBE) framework to predict potential energy surfaces (PES) for large chemically hierarchical systems. It enables transferable, data-adaptive learning to reproduce first-principles PES with improved scalability.
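The many-body expansion that the framework builds on is standard; truncated at third order it reads:

```latex
E_{\text{tot}} \;\approx\; \sum_{i} E_i \;+\; \sum_{i<j} \Delta E_{ij} \;+\; \sum_{i<j<k} \Delta E_{ijk},
```

with two-body corrections $\Delta E_{ij} = E_{ij} - E_i - E_j$ and three-body corrections $\Delta E_{ijk} = E_{ijk} - \Delta E_{ij} - \Delta E_{ik} - \Delta E_{jk} - E_i - E_j - E_k$, where $E_{ij}$ and $E_{ijk}$ are energies of fragment dimers and trimers. The fragment-based GNN supplies the per-fragment and correction terms in place of repeated first-principles calculations.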
Inferring Latent Temporal Sparse Coordination Graph for Multi-Agent RL proposes learning latent, temporally sparse coordination graphs to represent agent interactions. This reduces computation while preserving important coordination signals, improving multi-agent cooperation.
FIT-GNN tackles inference-time scalability by using graph coarsening to reduce computation. It introduces two methods—Extra Nodes and Coarsening—to accelerate GNN inference with controlled accuracy loss, enabling faster deployment.
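The coarsening idea reduces inference cost by pooling nodes into clusters and running the GNN on the smaller graph. The following is a generic sketch of cluster-based coarsening, assuming a precomputed node-to-cluster assignment; it is not FIT-GNN's specific Extra Nodes or Coarsening method.

```python
import numpy as np

def coarsen(adj, features, clusters):
    """Pool a graph to a smaller one given a node -> cluster assignment.

    adj:      (n, n) adjacency matrix
    features: (n, d) node features
    clusters: (n,) cluster id per node, with values in [0, k)
    Returns the (k, k) coarse adjacency and (k, d) mean-pooled features.
    """
    k = clusters.max() + 1
    # assignment matrix P: P[i, c] = 1 iff node i belongs to cluster c
    P = np.eye(k)[clusters]
    coarse_adj = P.T @ adj @ P                  # sums edge weights between clusters
    sizes = P.sum(axis=0, keepdims=True).T      # (k, 1) cluster sizes
    coarse_feat = (P.T @ features) / sizes      # mean-pool features per cluster
    return coarse_adj, coarse_feat
```

A GNN forward pass on the coarse graph then costs O(k^2) instead of O(n^2) for dense message passing, which is where the inference speedup comes from; the accuracy loss depends on how well clusters respect the original structure.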
Graph Defense Diffusion Model learns a diffusion-based purification process to defend GNNs against multiple adversarial attacks. It models graph purification as a diffusion process, recovering clean graph structure and improving resilience.
Mamba-Based Graph Convolutional Networks (MbaGCN) tackle over-smoothing by introducing selective state-space messaging inspired by the Mamba paradigm. The approach distinguishes the importance of information from different neighborhoods, enabling deeper, more expressive GNNs.
Dual Mamba extends the Mamba-based approach to node-specific representation learning, modeling progressive, node-specific evolution across layers with selective state-space modeling and incorporating global information to further mitigate over-smoothing.
Bandwidth-constrained Variational Message Encoding studies what information to transmit under hard bandwidth limits in cooperative MARL. It shows naive dimensionality reduction degrades coordination and proposes a variational encoding scheme to selectively preserve informative messages.
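The variational idea can be sketched as a stochastic linear bottleneck whose latent width is the bandwidth budget, trained with a KL penalty against a unit-Gaussian prior. This is a generic variational-bottleneck toy under assumptions (the function name, linear encoder, and Gaussian prior are all inventions), not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_message(obs, W_mu, W_logvar, budget_dim):
    """Compress a local observation into a small stochastic message.

    obs: (d,) local observation; W_mu, W_logvar: (budget_dim, d) linear maps.
    Returns the sampled message and the KL term against a unit Gaussian prior,
    which bounds how much information the message can carry per channel use.
    """
    mu = W_mu @ obs
    logvar = W_logvar @ obs
    eps = rng.normal(size=budget_dim)
    z = mu + np.exp(0.5 * logvar) * eps          # reparameterized sample
    kl = 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)
    return z, kl
```

Unlike naive dimensionality reduction (e.g. truncating the observation), the KL term lets training allocate the fixed budget to the coordinates that actually matter for coordination.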
BLEG proposes using LLMs as brain-network enhancers for fMRI-based GNN analyses. By injecting LLM-derived representations or priors, it mitigates feature sparsity and domain knowledge gaps, boosting performance on brain-network tasks.
Bayesian Social Deduction with Graph-Informed Language Models introduces a hybrid framework that externalizes belief into a graph-based Bayesian model to improve social deduction reasoning in real time, combining LLM reasoning with probabilistic graph inference.
Bayesian Ego-graph Inference for Networked MARL develops a stochastic ego-graph policy that adapts to dynamic, local neighborhoods in networked MARL. It enables decentralized learning under changing graphs while maintaining robust coordination.
From Business Events to Auditable Decisions: Ontology-Governed Graph Simulation for Enterprise AI introduces LOM-action, an event-driven ontology simulation that deterministically mutates enterprise graphs in a sandbox to produce auditable, grounded decisions with traceable rationale.
This work shows that large language models underperform graph-based parsers on supervised relation extraction when the underlying linguistic graph is highly complex. The authors compare four LLMs against a graph-based parser on six relation extraction datasets, demonstrating the gap on complex graphs and highlighting the limits of LLMs for extracting structured relations from graph-rich inputs.
SatQNet investigates entanglement routing in satellite-assisted quantum networks, where satellite motion and stochastic link generation create a dynamic topology. It employs directed line graph neural networks to learn routing policies that adapt to changing links without relying on global topology awareness. The approach aims to improve long-distance entanglement distribution by leveraging graph-based learning suited to directed line graphs.
HyperMem introduces a hypergraph-based hierarchical memory for long-term conversations to capture high-order associations beyond pairwise relations. Unlike retrieval-augmented generation and traditional graph memories, HyperMem explicitly models joint dependencies among multiple elements, enabling more coherent and persistent memory across extended dialogues. This approach improves coherence, persistence, and personalization in long-running conversations.
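The distinction from pairwise graph memories can be made concrete: a hyperedge stores a fact against a whole set of entities at once. The class below is a minimal illustrative sketch of that data structure (the class name, API, and overlap-ranked recall are assumptions, not HyperMem's design).

```python
class HypergraphMemory:
    """Minimal hypergraph memory: each stored fact is a hyperedge linking an
    arbitrary set of entities, so joint (higher-order) associations are kept
    explicitly rather than decomposed into pairwise links."""

    def __init__(self):
        self.edges = {}          # frozenset of entities -> list of facts

    def store(self, entities, fact):
        self.edges.setdefault(frozenset(entities), []).append(fact)

    def recall(self, query_entities):
        """Return facts whose hyperedge overlaps the queried entities,
        ranked by overlap size, so joint matches surface first."""
        q = set(query_entities)
        hits = [(len(e & q), f) for e, facts in self.edges.items()
                for f in facts if e & q]
        return [f for _, f in sorted(hits, key=lambda t: -t[0])]

mem = HypergraphMemory()
mem.store({"alice", "paris", "2023"}, "Alice visited Paris in 2023")
mem.store({"alice"}, "Alice likes tea")
out = mem.recall({"alice", "paris"})
```

A pairwise graph would have to reassemble the alice/paris/2023 fact from separate edges; here the three-way association is a single retrievable unit, which is the property the paper argues supports coherent long-term memory.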