← Home

Daily arXiv Papers

Graph Neural Networks · Graph Learning · LLM × Graph

Showing 151 papers for ICLR 2026

Discrete Bayesian Sample Inference for Graph Generation
Graph Learning

GraphBSI is a one-shot graph generative model based on Bayesian Sample Inference (BSI). Instead of evolving samples directly, GraphBSI iteratively refines a belief over graphs in the continuous space of distribution parameters, naturally handling discrete structures. This provides a flexible framework for generating discrete graphs such as molecules.
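As a toy illustration of the belief-refinement idea, here is a minimal sketch that keeps Bernoulli edge-existence beliefs in logit space and folds in noisy evidence at each step; the `denoiser` stand-in and the simple additive update are our simplifications, not GraphBSI's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def refine_beliefs(logits, denoiser, steps=10, noise=1.0):
    """Iteratively refine a Bernoulli belief over edges.

    `logits` parameterize independent edge-existence beliefs; each step
    draws a noisy observation conditioned on the current belief and
    folds it back in as additive evidence in logit space. `denoiser`
    is a hypothetical stand-in for the learned network.
    """
    for _ in range(steps):
        probs = 1.0 / (1.0 + np.exp(-logits))   # current belief
        obs = denoiser(probs) + noise * rng.normal(size=logits.shape)
        logits = logits + obs                    # Bayesian-style evidence accumulation
    return 1.0 / (1.0 + np.exp(-logits))

# Toy denoiser: always pushes beliefs toward a fixed target adjacency.
target = np.array([1.0, 0.0, 1.0])
probs = refine_beliefs(np.zeros(3), lambda p: 2.0 * (target - 0.5))
```

Discreteness never enters the update: the belief lives in the continuous space of distribution parameters, and a discrete graph is only sampled at the end.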

Si-GT: Fast Interconnect Signal Integrity Analysis for Integrated Circuit Design via Graph Transformers
GNN Graph Learning

Si-GT introduces a transformer-based model for fast and accurate signal integrity analysis of IC interconnects. It uses three key designs to scale SI analysis, including a virtual NET-inspired abstraction, efficient graph representation, and Transformers tailored for circuit data, enabling rapid evaluation without full SPICE simulations.

Controllable Logical Hypothesis Generation for Abductive Reasoning in Knowledge Graphs
Knowledge Graph Graph Learning

The paper defines controllable hypothesis generation for abductive reasoning in knowledge graphs to reduce redundancy and irrelevance. It addresses challenges in generating long, complex logical hypotheses and introduces mechanisms to steer the search toward useful, targeted explanations.

Learning Hierarchical and Geometry-Aware Graph Representations for Text-to-CAD
Graph Learning

Text-to-CAD requires long-horizon reasoning to translate instructions into code. The approach uses hierarchical and geometry-aware graph representations to preserve assembly hierarchy and geometric constraints, reducing error propagation compared with flat decoding. This leads to more robust, feasible CAD generation.

HGNet: Scalable Foundation Model for Automated Knowledge Graph Generation from Scientific Literature
Knowledge Graph Graph Learning

HGNet proposes a scalable foundation model for automated knowledge graph generation from scientific literature. It targets long, multi-word entities, cross-domain generalization, and hierarchical/logical constraints inherent in scientific knowledge, improving robustness and adaptability beyond standard LLMs.

A Structured, Tagged, and Localized Visual Question Answering Dataset with Full Sentence Answers and Scene Graphs for Chest X-ray Images
Graph Learning

We introduce MIMIC-Ext-CXR-QBA (CXR-QBA), a large chest X-ray VQA dataset with full-sentence answers, bounding boxes, and scene graphs. It provides 42 million QA pairs with multi-granular, multi-part answers to support targeted, localization-aware VQA in medical imaging.

Query-Aware Flow Diffusion for Graph-Based RAG with Retrieval Guarantees
Graph Learning

Query-Aware Flow Diffusion RAG (QAFD-RAG) is a training-free framework for graph-based retrieval-augmented generation. It dynamically adapts subgraph exploration via diffusion flows according to the query and provides theoretical guarantees on subgraph quality and relevance.

DAMR: Efficient and Adaptive Context-Aware Knowledge Graph Question Answering with LLM-Guided MCTS
Knowledge Graph Graph Learning

DAMR offers efficient, adaptive context-aware KG QA by guiding MCTS with LLMs. It combines retrieval, reasoning, and adaptive path exploration to produce more accurate answers with improved efficiency.

PRISM: Partial-label Relational Inference with Spatial and Spectral Cues
Graph Learning

PRISM tackles partial-label graph learning by inferring the true relational labels from a candidate label set using spatial and spectral cues. The framework unifies relational inference with cues to mitigate overfitting to noisy candidates.

HEIST: A Graph Foundation Model for Spatial Transcriptomics and Proteomics Data
Graph Learning GNN

HEIST introduces a graph foundation model tailored for spatial transcriptomics and proteomics data, integrating spatial coordinates with molecular counts to model cells within their tissue context. It enables more accurate representation of cellular heterogeneity and spatial organization.

GALAX: Graph-Augmented Language Model for Explainable Reinforcement-Guided Subgraph Reasoning in Precision Medicine
GNN Graph Learning LLM × Graph

GALAX combines graph-augmented language modeling with explainable reinforcement-guided subgraph reasoning for precision medicine. By integrating numerical omics, topology, and textual knowledge, it enables mechanistic explanations for medical decisions.

Graph-based Nearest Neighbors with Dynamic Updates via Random Walk-Based Analysis
Graph Learning Graph Theory

The method uses random-walk analysis to support deletions and other dynamic updates in graph-based approximate-nearest-neighbor indexes such as HNSW, maintaining retrieval quality while keeping updates efficient.

Efficient Learning on Large Graphs using a Densifying Regularity Lemma
Graph Theory Graph Learning

The paper introduces the Intersecting Block Graph (IBG), a low-rank graph factorization built from intersecting bipartite components, and uses a constructive, densifying weak regularity lemma to approximate arbitrary graphs efficiently.

GraphUniverse: Enabling Systematic Evaluation of Inductive Generalization
Graph Learning

GraphUniverse enables systematic evaluation of inductive generalization by generating families of graphs with persistent communities. This framework lets researchers study how models generalize to unseen graphs with consistent structural semantics.

FSOD-VFM: Few-Shot Object Detection with Vision Foundation Models and Graph Diffusion
Graph Learning

FSOD-VFM combines vision foundation models with a universal proposal network, SAM2, and DINOv2 features to tackle few-shot object detection. It addresses the overfragmentation of UPN bounding boxes and improves adaptation to novel object categories.

Multi-Domain Transferable Graph Gluing for Building Graph Foundation Models
Graph Learning

Multi-Domain Transferable Graph Gluing presents a differential geometry perspective to merge diverse graph datasets into a unified representation, improving consistency and transferability of graph foundation models across domains.

Adaptive Canonicalization with Application to Invariant Anisotropic Geometric Networks
GNN Graph Learning

Adaptive Canonicalization introduces input- and network-dependent canonicalization to reduce discontinuities in equivariant models. It uses prior-based maximization to obtain a standard form that adapts to the data and the network.

Sheaves Reloaded: A Direction Awakening
GNN Graph Learning

Sheaves Reloaded extends Sheaf Neural Networks with directionality via the Directed Cellular Sheaf, defining a directed sheaf Laplacian to better capture oriented relationships.

Self-Consistency Improves the Trustworthiness of Self-Interpretable GNNs
GNN Graph Learning

Self-Consistency analyzes faithfulness in self-interpretable GNNs and shows that training objectives aligned with self-consistency improve explanation faithfulness. The paper provides empirical validation.

Physics-Inspired All-Pair Interaction Learning for 3D Dynamics Modeling
GNN Graph Learning

PAINET is a physics-inspired all-pair interaction model for 3D dynamics. It captures unobserved interactions beyond explicitly given structures and delivers improved trajectory predictions with SE(3) equivariance.

One LLM Token for Explicit Graph Structural Understanding
LLM × Graph

We propose to represent graph structure with a single special token in LLMs, addressing structural hallucination and token inefficiency by avoiding full verbalization or soft-prompt embeddings. The token encodes the entire graph structure (SOF), enabling explicit structural understanding within language models.

Modality-free Graph In-context Alignment
Graph Learning

We introduce Modality-Free Graph In-context Alignment (MF-GIA) to achieve cross-domain alignment for graph foundation models without relying on modality-specific encoders. By aligning graph representations in-context with downstream reasoning, MF-GIA enables pretrained graph encoders to operate across data modalities even when graphs are pre-vectorized or raw data are unavailable.

GTool: Graph Enhanced Tool Planning with Large Language Model
Graph Learning

We show that existing tool planning with LLMs treats tools as isolated components and fails to leverage their dependencies. GTool proposes to model tool dependencies to produce valid, scalable tool plans even when the toolset is large.

Healthcare Insurance Fraud Detection via Continual Fiedler Vector Graph Model
Graph Theory Graph Learning

We propose the Continual Fiedler Vector Graph (ConFVG) model for healthcare insurance fraud detection, designed for limited supervision and rapidly evolving fraud patterns. By leveraging spectral (Fiedler) graph properties and continual learning, ConFVG captures structural anomalies and adapts online to new fraud tactics.
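The spectral ingredient here is standard: the Fiedler vector is the eigenvector of the second-smallest Laplacian eigenvalue, and its sign pattern exposes community structure. A minimal NumPy sketch (the toy graph and its use are illustrative, not ConFVG's pipeline):

```python
import numpy as np

def fiedler_vector(adj):
    """Return the Fiedler vector (eigenvector of the second-smallest
    Laplacian eigenvalue) of an undirected graph given its adjacency."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    vals, vecs = np.linalg.eigh(lap)   # eigenvalues in ascending order
    return vecs[:, 1]                  # skip the constant eigenvector

# Two triangles (nodes 0-2 and 3-5) joined by a single bridge edge:
A = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[u, v] = A[v, u] = 1.0
f = fiedler_vector(A)
# The signs of the Fiedler entries separate the two communities.
```

In a fraud setting, claims whose connections straddle such a spectral cut are structurally anomalous; the continual-learning component then keeps this signal current as the graph evolves.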

Full-Graph vs. Mini-Batch Training: Comprehensive Analysis from a Batch Size and Fan-Out Size Perspective
GNN Graph Learning

We provide a comprehensive analysis comparing full-graph and mini-batch GNN training from the perspectives of batch size and fan-out. The study characterizes convergence, generalization, and computational efficiency, highlighting when each regime is preferable for different graph workloads.

HSG-12M: A Large-Scale Dataset of Spatial Multigraphs from the Energy Spectra of non-Hermitian Crystals
Graph Learning

We introduce HSG-12M, a large-scale dataset of spatial multigraphs derived from the energy spectra of non-Hermitian crystals. The Hamiltonian spectral graphs encoded in the dataset provide rich fingerprints of electronic behavior and enable AI-assisted exploration of non-Hermitian physics.

Directed Semi-Simplicial Learning with Applications to Brain Activity Decoding
Graph Learning Graph Theory

We introduce Semi-Simplicial Neural Networks (SSNs) to capture directed, higher-order relationships beyond pairwise interactions. By extending topological deep learning to directed semi-simplicial complexes, SSNs enable improved brain activity decoding and other complex systems.

UniTrack: Differentiable Graph Representation Learning for Multi-Object Tracking
Graph Learning

UniTrack provides a plug-and-play differentiable graph-based MOT loss that optimizes detection accuracy, identity preservation, and spatiotemporal consistency in an end-to-end fashion. It can be integrated with existing MOT pipelines without modifying architectures.

HYPER: A Foundation Model for Inductive Link Prediction with Knowledge Hypergraphs
Knowledge Graph Graph Learning

HYPER is a foundation model for inductive link prediction on knowledge hypergraphs, capable of handling novel entities and novel relation types not seen during training. It generalizes to any knowledge hypergraph by learning transferable representations.

Paradigm Shift of GNN Explainer from Label Space to Prototypical Representation Space
GNN Graph Learning

We shift GNN explainers from the graph label space to a prototypical representation space, enabling better utilization of structural information during explanation optimization. The paradigm improves fidelity and interpretability of instance-level explanations.

Global-Recent Semantic Reasoning on Dynamic Text-Attributed Graphs with Large Language Models
GNN Graph Learning LLM × Graph

We study dynamic text-attributed graphs (DyTAGs) and propose Global-Recent Semantic Reasoning to capture recent-global temporal semantics. The approach leverages LLMs to process evolving text while addressing efficiency challenges.

Fair Graph Machine Learning under Adversarial Missingness Processes
GNN Graph Learning

We propose Better Fair than Sorry (BFtS), a fair missing data imputation model designed for adversarial missingness processes that can mask fairness. BFtS improves fairness evaluation by robustly imputing sensitive attributes under adversarial conditions.

DHG-Bench: A Comprehensive Benchmark for Deep Hypergraph Learning
Graph Learning

DHG-Bench provides a comprehensive benchmark for deep hypergraph learning, standardizing experimental protocols and enabling multi-dimensional analyses of Hypergraph Neural Networks (HNNs). The benchmark facilitates fair, reproducible comparisons.

Panoptic Pairwise Distortion Graph
Graph Learning

We propose the Distortion Graph (DG) as a new task and representation for pairwise image assessment, treating paired images as a region-based graph and encoding distortion type, severity, comparison, and quality scores. This extends intra-image scene graphs to inter-image evaluation.

KGOT: Unified Knowledge Graph and Optimal Transport Pseudo-Labeling for Molecule-Protein Interaction Prediction
Knowledge Graph Graph Learning

KGOT unifies knowledge graph context with optimal transport-based pseudo-labeling to improve molecule-protein interaction (MPI) prediction under scarce labels. It leverages broader biological context such as genes and pathways to enhance predictive power.

PoSh: Using Scene Graphs to Guide LLMs-as-a-Judge for Detailed Image Descriptions
Graph Learning LLM × Graph

PoSh introduces a scene-graph-guided metric to judge detailed image descriptions by prompting LLMs with a structured scene-graph rubric. It produces fine-grained scores that localize errors in long descriptions.

On The Expressive Power of GNN Derivatives
GNN Graph Theory

We show that derivatives of GNN outputs with respect to node features can enhance expressivity beyond standard architectures. Theoretical results demonstrate how such derivatives expand the representational power of GNNs.

TopoFormer: Topology Meets Attention for Graph Learning
Graph Learning

TopoFormer encodes graph topology into attention by Topo-Scan, which slices node/edge filtrations into a short sequence of topological tokens processed by a Transformer. This yields scalable, parallelizable graph representations and avoids traditional persistent homology bottlenecks.
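A toy version of the slicing idea: filter nodes by a scalar value, take sublevel sets at a few thresholds, and emit one coarse topological token per slice (here just a connected-component count; Topo-Scan's actual tokens are richer):

```python
def components(nodes, edges):
    """Count connected components among `nodes` using union-find."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for u, v in edges:
        if u in parent and v in parent:    # keep only induced edges
            parent[find(u)] = find(v)
    return len({find(v) for v in nodes})

def topo_scan(values, edges, thresholds):
    """One topological token (component count) per filtration slice."""
    tokens = []
    for t in thresholds:
        sub = [v for v, x in values.items() if x <= t]
        tokens.append(components(sub, edges))
    return tokens

# Path graph 0-1-2-3 with node values 0, 2, 1, 3, sliced at t = 0..3:
values = {0: 0, 1: 2, 2: 1, 3: 3}
edges = [(0, 1), (1, 2), (2, 3)]
tokens = topo_scan(values, edges, [0, 1, 2, 3])
# tokens == [1, 2, 1, 1]
```

The resulting short token sequence is what a Transformer then consumes, which is why this sidesteps the cost of full persistent homology.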

FACET: A Fragment-Aware Conformer Ensemble Transformer
Graph Learning GNN

FACET is a fragment-aware conformer ensemble transformer that efficiently fuses features from multiple 3D conformers with 2D molecular graphs via a differentiable graph transformer. It approximates expensive fused representations with an efficient, scalable approach.

MobileKGQA: On-Device KGQA System on Dynamic Mobile Environments
Knowledge Graph

MobileKGQA presents the first on-device KGQA system capable of adapting to evolving databases with minimal resource demands. It addresses resource constraints and data accumulation for on-device knowledge graph question answering.

DAG-Math: Graph-Guided Mathematical Reasoning in LLMs
LLM × Graph Graph Learning

This work reframes chain-of-thought (CoT) as a rule-based stochastic process on directed acyclic graphs, where nodes are intermediate derivation states and edges are rule applications. It introduces a metric called logical closeness that measures how tightly a model's CoT trajectory adheres to the underlying reasoning rules. The framework offers diagnostic insight into LLM mathematical reasoning and can guide prompt design and refinement to improve reliability.
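As a sketch of the DAG view, a derivation can be stored as (premise, rule, conclusion) edges and scored by how many edges apply permitted rules; the `logical_closeness` function below is an illustrative stand-in for the paper's metric, not its definition:

```python
def logical_closeness(steps, allowed_rules):
    """Fraction of derivation edges that apply a permitted rule.

    `steps` is a list of (premise, rule, conclusion) edges forming a
    DAG over intermediate derivation states. This scoring is a toy
    stand-in for the paper's logical-closeness metric.
    """
    if not steps:
        return 1.0
    ok = sum(1 for _, rule, _ in steps if rule in allowed_rules)
    return ok / len(steps)

trace = [
    ("x + 1 = 3", "subtract_1", "x = 2"),
    ("x = 2", "square", "x**2 = 4"),
    ("x**2 = 4", "guess", "x = 4"),   # not a valid rule application
]
score = logical_closeness(trace, {"subtract_1", "square", "add"})
# score == 2/3
```

A low score flags CoT trajectories that reach answers through steps the rule system does not license, which is the diagnostic signal the framework exploits.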

From Embedding to Control: Representations for Stochastic Multi-Object Systems
Graph Learning

We introduce Graph Controllable Embeddings (GCE) to learn stochastic multi-object dynamics for linear control. GCE relies on Hilbert space embeddings to map probability distributions of controlled dynamics into an RKHS, enabling linear control techniques on complex, interacting objects with nonuniform interactions and random topologies. The approach provides a general, scalable framework for accurate modeling and subsequent control.

MolecularIQ: Characterizing Chemical Reasoning Capabilities Through Symbolic Verification on Molecular Graphs
Graph Learning

MolecularIQ proposes a framework to characterize chemical reasoning capabilities of LLMs by performing symbolic verification on molecular graphs. Since molecular properties depend on composition and structure encoded in molecular graphs, the work emphasizes reasoning over structure rather than relying on text-only cues, and introduces evaluation methods that mitigate leakage and bias in chemistry benchmarks.

Beyond Entity Correlations: Disentangling Event Causal Puzzles in Temporal Knowledge Graphs
Knowledge Graph

The paper argues that focusing on entity correlations in Temporal Knowledge Graphs (TKGs) misses heterogeneous causalities embedded in events. It proposes a structural causal model for TKGs and introduces a method, the Heterogeneous Event Causality Disentangler, to separate multiple event-level causal factors and improve event prediction under weak supervision.

Rethinking the Gold Standard: Why Discrete Curvature Fails to Fully Capture Over-squashing in GNNs?
GNN Graph Theory

The work reevaluates discrete curvature as a predictor of over-squashing in Graph Neural Networks, arguing that highly negative curvature is sufficient but not necessary for over-squashing. The authors provide counterexamples that demonstrate the limitation of curvature-based explanations and call for more nuanced measures.

A Brain Graph Foundation Model: Pre-Training and Prompt-Tuning across Broad Atlases and Disorders
Graph Learning

BrainGFM is a brain graph foundation model pre-trained on large-scale fMRI-based graphs across diverse brain atlases and disorders. The pretraining uses graph contrastive learning and graph masked autoencoders to capture shared and unique structural patterns, enabling downstream neuroscience tasks with better generalization.

WATS: Wavelet-Aware Temperature Scaling for Reliable Graph Neural Networks
GNN

WATS introduces Wavelet-Aware Temperature Scaling to calibrate GNN predictions more reliably. Unlike methods that rely only on one-hop statistics, WATS leverages graph wavelet representations to capture fine-grained topological heterogeneity and adjusts confidence via a temperature parameter learned with respect to the graph structure.
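The calibration backbone is ordinary temperature scaling, applied per node. The sketch below takes the per-node temperatures as given; in WATS they would instead be learned from graph-wavelet features:

```python
import numpy as np

def calibrate(logits, temps):
    """Scale each node's logits by its own temperature before softmax.

    `logits` is (num_nodes, num_classes); `temps` is (num_nodes,).
    A temperature above 1 flattens the distribution, lowering the
    node's confidence without changing its predicted class.
    """
    z = logits / temps[:, None]
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

logits = np.array([[4.0, 0.0], [4.0, 0.0]])
temps = np.array([1.0, 4.0])               # node 1 gets softened
probs = calibrate(logits, temps)
# probs[1, 0] < probs[0, 0]: higher temperature, lower confidence
```

The design choice is that calibration becomes node-dependent: structurally atypical nodes can be assigned higher temperatures than nodes in homogeneous regions.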

MAVEN: A Mesh-Aware Volumetric Encoding Network for Simulating 3D Flexible Deformation
GNN Graph Learning

MAVEN proposes a mesh-aware volumetric encoding for 3D solid deformation simulation. By incorporating higher-dimensional geometric features such as facets and cells in addition to vertices and edges, the method more accurately represents boundaries and volumes, improving prediction of flexible deformations and contact.

On the Sample Complexity of GNNs
GNN Graph Theory

The paper provides a minimax analysis of ReLU message-passing GNNs in both inductive and transductive settings. It derives how the worst-case generalization error scales with the sample size n and input dimension d, showing a √(log d / n) rate for arbitrary graphs, and discusses improvements under spectral-homophily assumptions.
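One way to typeset the quoted rate (the excess-risk notation R and the hypothesis-class symbol are our own shorthand, not necessarily the paper's):

```latex
\inf_{\hat f}\ \sup_{f^{*} \in \mathcal{F}_{\mathrm{GNN}}}
\ \mathbb{E}\!\left[ R(\hat f) - R(f^{*}) \right]
\;\asymp\; \sqrt{\frac{\log d}{n}}
```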

A Scalable Inter-edge Correlation Modeling in CopulaGNN for Link Sign Prediction
GNN Graph Learning

The paper scales modeling of inter-edge correlations for link sign prediction by using Gaussian copulas and a correlation matrix, extending CopulaGNN. To handle computational intractability with naive edge-edge modeling, the authors propose scalable approximations to model latent edge dependencies efficiently.

Training-free Counterfactual Explanation for Temporal Graph Model Inference
Graph Learning

TemGX is a training-free, post-hoc explainer for temporal graph models. It discovers temporal subgraphs and their evolution that drive a TGNN's predictions, and introduces measures of structural and temporal influence to quantify explanations.

Explore-on-Graph: Incentivizing Autonomous Exploration of Large Language Models on Knowledge Graphs with Path-refined Reward Modeling
Knowledge Graph LLM × Graph

Explore-on-Graph incentivizes autonomous LLM exploration on knowledge graphs via path-refined reward modeling. It grounds LLM reasoning in verifiable knowledge sources to reduce hallucinations, and encourages exploration beyond fixed demonstrations by refining rewards along graph paths.

Learning from Algorithm Feedback: One-Shot SAT Solver Guidance with GNNs
GNN

RLAF trains a GNN to produce one-shot variable weights and polarity assignments for SAT solvers, providing a generic mechanism for injecting learned guidance into branching heuristics. In a single forward pass, the GNN scores all variables, so it can replace or augment existing heuristics as a plug-in.

One for Two: A Unified Framework for Imbalanced Graph Classification via Dynamic Balanced Prototype
GNN Graph Learning

UniImb is a unified framework for imbalanced graph classification that handles class and topological imbalance. It uses multi-scale topological features and learnable personalized graph perturbations to augment data, and a dynamic balanced prototype module to learn representative graph prototypes.

Pairwise is Not Enough: Hypergraph Neural Networks for Multi-Agent Pathfinding
GNN Graph Learning

Pairwise is Not Enough argues that MAPF requires modeling higher-order interactions beyond pairwise message passing. The authors propose Hypergraph Neural Networks to capture multi-agent interactions, addressing attention dilution in dense environments and improving planning quality.

Graph homophily booster: Rethinking the role of discrete features on heterophilic graphs
GNN Graph Learning

Graph homophily booster rethinks the role of discrete features on heterophilic graphs, arguing that current GNNs underperform because they neglect core discrete features. The work proposes methods that leverage discrete features to better handle heterophily.

ATEX-CF: Attack-Informed Counterfactual Explanations for Graph Neural Networks
GNN

ATEX-CF unifies adversarial attacks with counterfactual explanations for GNNs, generating minimal perturbations that flip predictions while aligning with counterfactual changes.

LRIM: a Physics-Based Benchmark for Provably Evaluating Long-Range Capabilities in Graph Learning
Graph Learning

LRIM provides a physics-based benchmark for provably evaluating long-range capabilities in graph learning: its tasks are constructed so that high accuracy demonstrably requires long-range information.

Compactness and Consistency: A Conjoint Framework for Deep Graph Clustering
GNN Graph Learning

Compactness and Consistency proposes a conjoint framework for deep graph clustering that enforces compact representations and cross-cluster consistency, aligning local similarity with global structure to mitigate redundancy and noise in graphs.

GNN Explanations that do not Explain and How to find Them
GNN

This work identifies a critical failure mode of self-explainable GNNs: explanations can be unambiguously unrelated to how the model infers labels. It demonstrates that SE-GNNs can achieve near-optimal predictive risk while producing explanations that do not reflect the actual decision process, revealing gaps and potential misleadingness in explanations. The paper discusses implications and ways to detect such misalignments.

Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency
Knowledge Graph

Distill-SynthKG introduces a data synthesis workflow for document-level knowledge graphs to improve coverage and efficiency. It builds SynthKG to generate high-quality document-KG pairs through systematic chunking and data synthesis, reducing reliance on expensive LLMs and improving graph consistency. The Distill version streamlines the workflow for scalable, practical use.

Learning Posterior Predictive Distributions for Node Classification from Synthetic Graph Priors
GNN Graph Learning

The work tackles universal node classification by learning posterior predictive distributions from synthetic graph priors, enabling generalization across graphs with diverse properties. It reduces dependence on per-graph labeled data by modeling uncertainty and leveraging priors. This approach aims for broadly applicable node classification across heterogeneous graphs.

G-reasoner: Foundation Models for Unified Reasoning over Graph-structured Knowledge
Knowledge Graph LLM × Graph

G-reasoner proposes foundation models for unified reasoning over graph-structured knowledge, merging retrieval-augmented generation with graph-based reasoning. It addresses fragmentation of external knowledge and weak knowledge-structure modeling in existing RAG systems. The framework enables structured, graph-aware reasoning over knowledge.

Vertically Unified Agents for Graph Retrieval-Augmented Complex Reasoning
Graph Learning

UniGraphRAG is a vertically unified agentic paradigm that jointly connects graph construction and retrieval in GraphRAG. It introduces a seed graph schema to bound automatic extraction, improving robustness to domain shifts and reducing misalignment between construction and retrieval. The integrated design yields tighter coordination across the whole retrieval-and-generation pipeline.

GPS: Directed Acyclic Graph guided Proactive Information Seeking in Large Language Models
LLM × Graph

GPS is a two-stage framework to enhance proactive information seeking in LLMs within RAG. It guides question-asking using a DAG-based reasoning structure embedded in retrieved knowledge, enabling more effective and efficient clarification. The method reduces ambiguity for underspecified queries.

Structure-Aware Graph Hypernetworks for Neural Program Synthesis
Graph Learning

Structure-Aware Graph Hypernetworks address neural program synthesis by conditioning hypernetworks on the target network’s structure while respecting neuron-permutation symmetry. Unlike traditional hypernetworks that emit flat weight vectors, this approach produces structured weights guided by user intent, improving synthesis fidelity.

Improving Long-Range Interactions in Graph Neural Simulators via Hamiltonian Dynamics
GNN

Information-preserving Graph Neural Simulators (IGNS) use Hamiltonian dynamics to better capture long-range interactions and to reduce error accumulation during autoregressive rollout. The approach preserves physical structure while improving the accuracy and stability of graph-based simulations.

FlowSymm: Physics–Aware, Symmetry–Preserving Graph Attention for Network Flow Completion
GNN Graph Learning

FlowSymm introduces a physics-aware, symmetry-preserving graph attention mechanism for network flow completion. It combines a group-action on divergence-free flows, a feature-conditioned graph-attention encoder, and a Tikhonov refinement solved via implicit bilevel optimization to produce minimum-norm, conservation-respecting flow completions.
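A linear sketch of conservation-respecting completion: fix the observed edge flows and solve a least-squares problem for the missing ones so that flow is conserved at interior nodes. This deliberately omits the group action, the attention encoder, and the bilevel Tikhonov refinement:

```python
import numpy as np

def complete_flow(incidence, flow, known_mask):
    """Minimum-norm completion of missing edge flows.

    `incidence` holds interior-node rows of the edge incidence matrix
    (tail -1, head +1), so conservation reads incidence @ flow = 0.
    Observed entries of `flow` stay fixed; lstsq returns the
    minimum-norm values for the rest.
    """
    unknown = ~known_mask
    B_u = incidence[:, unknown]
    rhs = -incidence[:, known_mask] @ flow[known_mask]
    sol, *_ = np.linalg.lstsq(B_u, rhs, rcond=None)
    out = flow.copy()
    out[unknown] = sol
    return out

# Directed path 0 -> 1 -> 2; only node 1 is interior.
B_int = np.array([[1.0, -1.0]])        # edge 0 enters, edge 1 leaves
f = np.array([2.0, 0.0])               # edge 0 observed, edge 1 missing
done = complete_flow(B_int, f, np.array([True, False]))
# Conservation at node 1 forces the missing flow to equal 2.0
```

The minimum-norm choice is what makes the completion unique when many flows satisfy conservation, mirroring the role of the Tikhonov term in the full method.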

OWLEYE: Zero-Shot Learner for Cross-Domain Graph Data Anomaly Detection
Graph Learning

OWLEYE is a zero-shot learner for cross-domain graph data anomaly detection, enabling detection on unseen graphs without retraining by leveraging cross-domain representations. This supports scalable anomaly detection across domains.

ProofFlow: A Dependency Graph Approach to Faithful Proof Autoformalization
Graph Theory

ProofFlow offers a dependency-graph approach to faithful autoformalization, building a DAG to map logical dependencies between natural-language statements and formal proofs. This structure-focused pipeline aims to preserve semantic meaning and the logical organization of arguments when translating to formal proof code.

R2PS: Worst-Case Robust Real-Time Pursuit Strategies under Partial Observability
Graph Theory

R2PS develops worst-case robust real-time pursuit strategies under partial observability for graph-based pursuit-evasion games. It analyzes limitations of current RL baselines and proposes strategies that remain robust when pursuers have imperfect information about the evader’s position.

Are we measuring oversmoothing in graph neural networks correctly?
GNN Graph Learning

The paper critiques oversmoothing metrics in GNNs, arguing that standard measures like Dirichlet energy fail to reliably reflect oversmoothing in realistic settings. It discusses limitations and suggests a more reliable evaluation approach, supported by experiments.

AdS-GNN - a Conformally Equivariant Graph Neural Network
GNN Graph Learning

AdS-GNN builds a conformally equivariant GNN by lifting data to Anti-de Sitter space, leveraging the isometries that correspond to conformal transformations. This enables the network to be equivariant under general conformal transformations.

Can You Hear Me Now? A Benchmark for Long-Range Graph Propagation
GNN Graph Learning

Can You Hear Me Now? introduces ECHO, a benchmark for evaluating long-range propagation in GNNs. It includes three synthetic tasks—single-source shortest paths, node eccentricity, and graph diameter—designed to stress-test very long-range dependencies across diverse graphs.
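The three task targets are cheap to compute exactly with breadth-first search, which is what makes them convenient supervision; a minimal sketch:

```python
from collections import deque

def bfs_dist(adj, src):
    """Hop distances from `src` in an unweighted graph (adjacency lists)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def targets(adj):
    """Per-node eccentricity and graph diameter, ECHO-style labels.

    Eccentricity of v = max distance from v; diameter = max eccentricity.
    """
    ecc = {v: max(bfs_dist(adj, v).values()) for v in adj}
    return ecc, max(ecc.values())

# 5-cycle: every node has eccentricity 2, so the diameter is 2.
cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
ecc, diam = targets(cycle)
```

A GNN restricted to k rounds of message passing cannot see beyond k hops, so exact answers on high-diameter graphs stress precisely the long-range propagation the benchmark targets.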

Evolving Graph Structured Programs for Circuit Generation with Large Language Models
Graph Learning

CircuitEvo uses large language models to iteratively evolve circuit programs toward more compact circuits while preserving functional accuracy. The approach balances circuit size and correctness through successive generations of LLM-guided edits.

On the Expressive Power of GNNs for Boolean Satisfiability
GNN Graph Learning

The paper analyzes the expressive power of GNNs for SAT solving via the Weisfeiler-Leman test, showing that the full WL hierarchy cannot in general distinguish satisfiable from unsatisfiable instances. It discusses the practical implications for using GNNs to solve SAT problems.
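The Weisfeiler-Leman limitation is easy to see concretely: 1-WL color refinement assigns identical color histograms to any two regular graphs of the same degree, the classic example being a 6-cycle versus two disjoint triangles. A minimal sketch of 1-WL (the SAT connection is via the graph encodings of formulas, which we do not reproduce here):

```python
def wl_colors(adj, rounds=3):
    """1-WL color refinement: hash each node's color together with the
    multiset of its neighbors' colors, repeated for `rounds` steps."""
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colors = {v: relabel[sig[v]] for v in adj}
    return colors

def wl_histogram(adj):
    """Color histogram; equal histograms mean 1-WL cannot distinguish."""
    hist = {}
    for c in wl_colors(adj).values():
        hist[c] = hist.get(c, 0) + 1
    return tuple(sorted(hist.items()))

# Both graphs are 2-regular, so refinement never separates them:
six_cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                 3: [4, 5], 4: [3, 5], 5: [3, 4]}
```

Since message-passing GNN expressivity is bounded by WL refinement, WL-indistinguishable encodings of satisfiable and unsatisfiable formulas bound what such GNNs can decide.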

Aria: an Agent for Retrieval and Iterative Auto-Formalization via Dependency Graph
Graph Learning

Aria is an agent for retrieval and iterative auto-formalization in Lean, using a two-phase Graph-of-Thought process: recursively decomposing conjectures into a dependency graph and then constructing the formal proof. This mirrors expert reasoning to support conjecture-level formalization.

Robustness in Text-Attributed Graph Learning: Insights, Trade-offs, and New Defenses
GNN Graph Learning

We propose a unified framework to evaluate robustness in text-attributed graph (TAG) learning. The framework compares classical GNNs, robust GNNs, and GraphLLMs across ten datasets to systematically study how textual and structural perturbations affect performance under different attack scenarios. The work yields practical insights and outlines new defenses for TAG robustness.

Cooperative Sheaf Neural Networks
GNN Graph Learning

We identify a limitation of sheaf neural networks: nodes cannot independently decide how they cooperate with their neighbors. We introduce a cellular-level mechanism that lets each node choose whether to convey and/or gather information, enabling more flexible diffusion. Experiments show gains on heterophilic tasks.

Atomic HINs: Entity-Attribute Duality for Heterogeneous Graph Modeling
Graph Learning

We propose an entity–attribute duality for HINs that atomizes attributes as entities with their own relations, while entities can play the role of attributes for others. This duality provides a theoretical foundation for flexible schema design and enables new modeling paradigms for heterogeneous information networks. The framework clarifies how to design HINs to emphasize different data aspects.

Compactness and Consistency: A Conjoint Framework for Deep Graph Clustering
GNN Graph Learning

We present a conjoint framework for deep graph clustering that explicitly enforces both representation compactness and cross-cluster consistency. By aligning local similarity with global structure, the method addresses the limitations of message passing and noisy graphs. Experiments demonstrate improved clustering quality and robustness.

CLAUSE: Agentic Neuro-Symbolic Knowledge Graph Reasoning via Dynamic Learnable Context Engineering
Knowledge Graph Graph Learning

CLAUSE is a three-agent neuro-symbolic framework for knowledge graph reasoning that treats context construction as a sequential decision process. It decides what to expand, which paths to follow or backtrack, what evidence to keep, and when to stop, balancing accuracy, latency, and provenance. The approach yields more efficient reasoning with predictable costs.

Out-of-Distribution Graph Models Merging
Graph Learning

We study out-of-distribution graph model merging, aiming to build a generalized model from pre-trained graph models trained on different domains. The method uses a graph generation strategy to instantiate a mixture distribution and then merges and fine-tunes the backbones via a mixture of experts. This improves cross-domain generalization.

The logical expressiveness of topological neural networks
GNN Graph Learning

We analyze the logical expressiveness of topological neural networks (TNNs). By incorporating higher-order relational structures into message passing, TNNs achieve greater expressive power than traditional GNNs in certain logical settings. The work provides theoretical connections between TNNs and logics beyond the Weisfeiler–Leman and first-order frameworks.

LEAP: Local ECT-Based Learnable Positional Encodings for Graphs
GNN Graph Learning

LEAP introduces learnable positional encodings for graphs based on the local Euler characteristic transform (ECT). We provide a differentiable approximation of the ECT and a local variant to supply rich structural priors for message-passing networks. Empirical results show improved performance on standard graph benchmarks.
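
As background, the (non-differentiable) Euler characteristic curve of a graph in one direction — the quantity a differentiable ECT approximates — can be computed directly. This is a generic sketch with an illustrative toy graph, not the paper's implementation:

```python
def euler_curve(heights, edges, thresholds):
    """Euler characteristic chi(t) = #vertices - #edges of the subgraph
    induced by vertices with height <= t (graphs have no higher cells)."""
    curve = []
    for t in thresholds:
        v = sum(1 for h in heights.values() if h <= t)
        e = sum(1 for (a, b) in edges
                if heights[a] <= t and heights[b] <= t)
        curve.append(v - e)
    return curve

# A 4-cycle; heights play the role of a filtration direction.
heights = {"a": 0.0, "b": 1.0, "c": 1.0, "d": 2.0}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(euler_curve(heights, edges, [0.0, 1.0, 2.0]))  # [1, 1, 0]
```

The curve ends at 0 because a cycle has Euler characteristic 0; collecting such curves over many directions gives the transform that serves as a structural descriptor.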

Learning from Historical Activations in Graph Neural Networks
GNN Graph Learning

We propose learning from historical activations for graph neural networks, a pooling scheme that utilizes activations from earlier layers rather than only the last layer. This enhances the final descriptor with richer hierarchical information and improves task performance. Experiments across domains validate the approach.

AtlasKV: Augmenting LLMs with Billion-Scale Knowledge Graphs in 20GB VRAM
Knowledge Graph LLM × Graph

AtlasKV is a parametric knowledge integration method that augments LLMs with billion-scale knowledge graphs within 20 GB of VRAM. By embedding the KG into model parameters, it reduces retrieval latency while preserving provenance and enabling scalable open-domain reasoning. The approach offers a practical alternative to external retrieval for large-scale knowledge.

Multi-Scale Diffusion-Guided Graph Learning with Power-Smoothing Random Walk Contrast for Multi-View Clustering
Graph Learning

We propose multi-scale diffusion-guided graph learning with power-smoothing random walk contrast for multi-view clustering. The method alleviates static-graph limitations by modeling cross-view relationships via diffusion at multiple scales and uses a power-smoothing random walk contrast to mitigate false negatives. Results show improved clustering quality and consistency across views.

Gelato: Graph Edit Distance via Autoregressive Neural Combinatorial Optimization
Graph Theory

Gelato introduces graph edit distance via autoregressive neural combinatorial optimization. An autoregressive model learns edit sequences to approximate GED, delivering high quality solutions with improved efficiency over traditional solvers. The approach demonstrates strong performance on standard GED benchmarks.

Revisiting Node Affinity Prediction in Temporal Graphs
GNN Graph Learning

We revisit node affinity prediction in temporal graphs and identify the key training and evaluation challenges when applying temporal GNNs to this task. The paper proposes practical remedies and a combined solution that yields improved predictive performance. The results demonstrate better alignment between training objectives and the temporal nature of the data.

Towards Improved Sentence Representations using Token Graphs
Graph Learning

GLOT is a structure-aware pooling module for improved sentence representations, built on token graphs derived from a frozen LLM. It learns relations among tokens and then aggregates them to form richer sentence representations. Experiments show improvements on standard sentence tasks.

GARLIC: Graph Attention-based Relational Learning of Multivariate Time Series in Intensive Care
GNN Graph Learning

GARLIC is a graph attention-based relational learning model for multivariate time series in intensive care. It imputes missing data using a learnable exponential decay encoder, models inter-sensor dependencies via time-lagged graphs, and fuses global patterns with cross-dimensional sequential attention. The result is accurate and interpretable predictions in ICU settings.

Diverse and Sparse Mixture-of-Experts for Causal Subgraph–Based Out-of-Distribution Graph Learning
Graph Learning

Diverse and Sparse Mixture-of-Experts for Causal Subgraph-Based OOD Graph Learning introduces an MoE framework that models instance-level heterogeneous causal subgraphs without relying on restrictive assumptions. A diverse and sparse routing scheme assigns subgraph experts to each instance, improving OOD generalization on graph data.

TetraGT: Tetrahedral Geometry-Driven Explicit Token Interactions with Graph Transformer for Molecular Representation Learning
GNN Graph Learning

TetraGT introduces tetrahedral geometry-driven explicit token interactions with a graph transformer for molecular representation learning. It encodes bond angles and torsion angles into the interaction mechanism to capture higher-order spatial relations. Experiments on molecular property prediction show gains over traditional graph-based representations.

Graph-of-Agents: A Graph-based Framework for Multi-Agent LLM Collaboration
Graph Learning LLM × Graph

Graph-of-Agents presents a graph-based framework for multi-agent LLM collaboration. It models LLMs as nodes in a communication graph and uses graph reasoning to select relevant agents, coordinate communication among them, and efficiently integrate responses. Experiments show improved task performance and robustness.

BrowseNet: Knowledge Graph-Based Associative Memory for Contextual Information Retrieval
Knowledge Graph Graph Learning

BrowseNet is a knowledge graph-based associative memory for contextual information retrieval. It builds a named-entity knowledge graph and performs query-specific subgraph exploration to retrieve semantically related documents more effectively. The approach enhances retrieval for retrieval-augmented generation tasks.

Knowledge Reasoning Language Model: Unifying Knowledge and Language for Inductive Knowledge Graph Reasoning
Knowledge Graph Graph Learning

Knowledge Reasoning Language Model unifies knowledge and language for inductive knowledge graph reasoning (KGR). It combines KG context with LLM-based reasoning to handle uncertain open-domain components and generalizes to unseen entities. The results demonstrate improved inductive KGR performance.

Relational Graph Transformer
GNN Graph Learning

Relational Graph Transformer studies applying graph transformers to relational entity graphs (heterogeneous temporal graphs). It argues that traditional GNNs struggle with complex structural patterns and long-range dependencies in relational data, and that standard positional encodings fail to generalize to these graphs. The paper proposes a relational transformer approach tailored for relational data to address these challenges.

Rapid Training of Hamiltonian Graph Networks Using Random Features
GNN Graph Learning

We show Hamiltonian Graph Networks can be trained rapidly using random features, enabling principled modeling of N-body dynamics with permutation invariance. Gradient-based optimization can be slow for large systems; random features accelerate training without sacrificing physics. Comparisons across 15 optimizers illustrate the efficiency gains.

Flock: A Knowledge Graph Foundation Model via Learning on Random Walks
Knowledge Graph Graph Learning

Flock introduces a Knowledge Graph Foundation Model trained on random walks to address zero-shot link prediction. It enforces equivariance over nodes and relations, enabling transfer to novel graphs with similar structural properties. It also discusses limits of deterministic equivariance that can hinder expressive power when relations are structurally similar but semantically different.

GraphShield: Graph-Theoretic Modeling of Network-Level Dynamics for Robust Jailbreak Detection
Graph Theory Graph Learning

GraphShield proposes a graph-theoretic detector for jailbreak prompts in LLMs by modeling information routing inside the model as token-layer graphs. It is lightweight and model-agnostic, extracting multi-scale structural and semantic features to identify jailbreak signatures, outperforming baselines in extensive experiments.

VoG: Enhancing LLM Reasoning through Stepwise Verification on Knowledge Graphs
Knowledge Graph Graph Learning LLM × Graph

VoG enhances LLM reasoning by enabling stepwise verification on knowledge graphs, using KG-guided checks to correct reasoning steps with evolving evidence. This mitigates hallucinations and factual errors in knowledge-intensive tasks beyond static KG integration.

ST-HHOL: Spatio-Temporal Hierarchical Hypergraph Online Learning for Crime Prediction
Graph Learning

ST-HHOL introduces Spatio-Temporal Hierarchical Hypergraph Online Learning for crime prediction. It builds a hierarchical hypergraph convolution that fuses crime data with heterogeneous contextual factors, and learns online to handle non-stationarity and concept drift.

Exchangeability of GNN Representations with Applications to Graph Retrieval
GNN Graph Learning

The work uncovers a probabilistic symmetry called exchangeability in GNN representations: trained node embeddings from a broad family of GNNs are exchangeable random variables, meaning their joint distribution is invariant to permutation of embedding dimensions. This property enables new approximations for transportation-based graph metrics and downstream tasks.

Probabilistic Kernel Function for Fast Angle Testing
Knowledge Graph Graph Learning

The paper proposes two projection-based probabilistic kernel functions for high-dimensional angle testing, one for angle comparison and one for angle thresholding. Unlike Gaussian random projections, they use reference angles and deterministic projection vectors, with no need for infinite projections; theory and experiments show efficiency and accuracy.
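
For contrast, the Gaussian random-projection baseline mentioned above rests on the classical SimHash identity P[sign(r·x) ≠ sign(r·y)] = θ(x, y)/π; a minimal sketch of that (projection-hungry) estimator, with illustrative vectors:

```python
import math
import random

random.seed(1)

def estimate_angle(x, y, num_proj=20000):
    """SimHash-style estimator: the signs of r.x and r.y under a random
    Gaussian direction r disagree with probability angle(x, y) / pi."""
    d = len(x)
    disagree = 0
    for _ in range(num_proj):
        r = [random.gauss(0.0, 1.0) for _ in range(d)]
        sx = sum(ri * xi for ri, xi in zip(r, x)) > 0.0
        sy = sum(ri * yi for ri, yi in zip(r, y)) > 0.0
        disagree += sx != sy
    return math.pi * disagree / num_proj

x, y = [1.0, 0.0], [1.0, 1.0]   # true angle: pi/4 ~ 0.785
print(round(estimate_angle(x, y), 2))
```

The slow O(1/sqrt(num_proj)) convergence of this baseline is exactly what motivates kernels that avoid averaging over many random projections.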

Learning with Dual-level Noisy Correspondence for Multi-modal Entity Alignment
LLM × Graph Graph Learning

We study multi-modal entity alignment with realistic noise at two levels: intra-entity attribute alignments and inter-graph correspondences. We propose methods to handle Dual-level Noisy Correspondence, improving robustness and accuracy of MM-EA under annotation noise.

Native Adaptive Solution Expansion for Diffusion-based Combinatorial Optimization
GNN Graph Learning

Native Adaptive Solution Expansion proposes a diffusion-based combinatorial optimization method that integrates instance-wise global expansion directly, avoiding reliance on external GP predictors. It handles hard constraints efficiently and improves solution quality over prior AE approaches.

Natural Identifiers for Privacy and Data Audits in Large Language Models
GNN Graph Learning

The paper argues that auditing differential privacy in LLMs without retraining is difficult and proposes Natural Identifiers as a practical approach for privacy and data audits, enabling post-training auditing of models and dataset-membership tests.

On the Universality and Complexity of GNN for Solving Second-order Cone Programs
GNN Graph Learning

The authors propose a graph representation for conic constraints and prove a universality theorem: there exist GNNs that can approximate the essential properties of SOCPs, enabling universal approximation for this class of convex problems.

Geometric Graph Neural Diffusion for Stable Molecular Dynamics
GNN Graph Learning

GGND introduces Geometric Graph Neural Diffusion to improve the stability of MD simulations by accounting for geometric information and diffusion processes, mitigating extrapolation failures when encountering unseen conformations.

Hourglass Persistence for Graphs, Simplices, and Cells
GNN Graph Learning

The work moves beyond inclusion-based persistent homology filtrations and introduces hourglass persistence descriptors for graphs, simplices, and cells, offering topological summaries across scales to capture persistent features beyond traditional filtrations.

Topology of Reasoning: Retrieved Cell Complex-Augmented Generation for Textual Graph Question Answering
LLM × Graph Graph Learning

Retrieved Cell Complex-Augmented Generation augments LLM reasoning for textual graph question answering with cell-complex structures recovered during retrieval. It emphasizes cycles and higher-order topology to enable more robust reasoning over relational loops.

GGBall: Graph Generative Model on Poincaré Ball
Graph Learning

GGBall provides a graph generation model in hyperbolic (Poincaré) space, combining HVQVAE with a Riemannian flow matching prior defined via closed-form geodesics. This enables flow-based priors to model complex latent distributions, while vector quantization helps preserve curvature-aware structure.

Graph Representational Learning: When Does More Expressivity Hurt Generalization?
GNN Graph Learning

The paper studies when more expressivity helps or hurts generalization in GNNs by introducing pseudometrics for graph similarity and deriving bounds that depend on train-test graph distance, model complexity, and data. It clarifies scenarios where increased expressivity may degrade generalization.

Towards Quantifying Long-Range Interactions in Graph Machine Learning: a Large Graph Dataset and a Measurement
Graph Learning

We quantify long-range interactions with a new large-scale City-Networks dataset for transductive graph learning on real-world city road networks, along with a measurement for assessing long-range dependencies and corresponding evaluation protocols.

Structurally Human, Semantically Biased: Detecting LLM-Generated References with Embeddings and GNNs
GNN Graph Learning

We construct paired citation graphs with human-authored ground-truth and GPT-4o-generated references for 10k focal papers, comparing structure-only features against embeddings to detect LLM-generated references. The study analyzes whether references can be distinguished by graph structure and textual content.

Benefits and Pitfalls of Reinforcement Learning for Language Model Planning: A Theoretical Perspective
Graph Theory

This theoretical perspective on RL for LLM planning shows that SFT may yield spurious, co-occurrence-based solutions, while RL improves planning mainly through exploration; the analysis highlights exploration's role in better generation.

LinearRAG: Linear Graph Retrieval Augmented Generation on Large-scale Corpora
LLM × Graph Graph Learning

LinearRAG introduces a scalable graph-based retrieval augmentation for large-scale corpora by adopting a linear graph retrieval process that reduces reliance on costly relation extraction. It aims to produce more stable graphs and improve multi-hop reasoning in RAG on unstructured data.

Graph Tokenization for Bridging Graphs and Transformers
Graph Learning

We present a graph tokenization framework that converts graphs into sequential representations compatible with transformers by combining reversible graph serialization with Byte Pair Encoding. Global statistics guide the serialization to better preserve structural information, enabling graph data to be effectively processed by LLMs.
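
The serialize-then-compress idea can be sketched generically; the degree-based serialization order, separator token, and toy graph below are illustrative choices for the example, not the paper's scheme:

```python
from collections import Counter

def serialize(edges):
    """One simple serialization choice: list edges, highest-degree
    endpoints first, with '|' as an edge separator."""
    deg = Counter(v for e in edges for v in e)
    order = sorted(edges, key=lambda e: (-max(deg[e[0]], deg[e[1]]), e))
    toks = []
    for a, b in order:
        toks += [str(a), str(b), "|"]
    return toks

def bpe(tokens, merges=2):
    """Byte-Pair Encoding: repeatedly fuse the most frequent adjacent
    token pair into a single compound token."""
    for _ in range(merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _ = pairs.most_common(1)[0]
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

edges = [(0, 1), (0, 2), (0, 3), (2, 3)]
toks = serialize(edges)
print(toks)                 # flat token stream for the transformer
print(bpe(toks, merges=2))  # shorter stream with fused frequent pairs
```

Frequent local patterns (here, recurring separator–node pairs) collapse into single tokens, which is how BPE shortens sequences while keeping the serialization reversible in principle.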

Inductive Reasoning for Temporal Knowledge Graphs with Emerging Entities
Knowledge Graph Graph Learning

This work studies inductive reasoning on Temporal Knowledge Graphs (TKGs) with emerging entities not seen in training, which can constitute about a quarter of entities. It analyzes the failure of closed-world assumptions and proposes methods to generalize to new entities, improving future event prediction.

Graph-Theoretic Intrinsic Reward: Guiding RL with Effective Resistance
Graph Theory

We introduce an intrinsic reward for RL based on Effective Resistance from spectral graph theory to encourage exploration toward configurations linked to successful goals. Theoretical guarantees show faster convergence and reduced variance, yielding more robust policies in sparse-reward environments.
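
Effective resistance itself is standard spectral graph theory: ground one node, inject unit current, and read off the potential difference. A pure-Python sketch of that computation (illustrative, not the paper's reward code):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, pure Python)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def effective_resistance(n, edges, u, v):
    """R_eff(u, v): ground node n-1, inject one unit of current at u,
    extract it at v, and return the resulting potential difference."""
    L = [[0.0] * n for _ in range(n)]
    for a, b in edges:  # unit-weight graph Laplacian
        L[a][a] += 1.0; L[b][b] += 1.0
        L[a][b] -= 1.0; L[b][a] -= 1.0
    m = n - 1                          # drop the grounded node's row/column
    A = [row[:m] for row in L[:m]]
    rhs = [0.0] * m
    if u < m: rhs[u] += 1.0
    if v < m: rhs[v] -= 1.0
    x = solve(A, rhs) + [0.0]          # grounded node sits at potential 0
    return x[u] - x[v]

# Triangle: two parallel paths of resistance 1 and 2 -> R = 2/3.
print(round(effective_resistance(3, [(0, 1), (1, 2), (0, 2)], 0, 1), 6))
```

Low effective resistance indicates many short, well-connected paths between two states, which is the structural signal such a reward would exploit.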

Bridging Input Feature Spaces Towards Graph Foundation Models
Graph Learning

ALL-IN projects node features into a shared random space and builds representations via covariance-based statistics, enabling transfer across datasets with different feature semantics and scales. This simple, theoretically grounded approach removes feature-space misalignment, improving cross-dataset generalization for graph models.

Multi-Scale Hypergraph Meets LLMs: Aligning Large Language Models for Time Series Analysis
LLM × Graph Graph Learning

MSH-LLM introduces a Multi-Scale Hypergraph approach to align LLMs with time-series analysis, using a hyperedge mechanism to capture multi-scale semantic information. This enables better use of LLMs for time-series tasks by modeling complex, hierarchical relations.

Glance for Context: Learning When to Leverage LLMs for Node-Aware GNN-LLM Fusion
GNN LLM × Graph Graph Learning

We reframe LLM–GNN fusion around nodes where GNNs struggle, showing that GNNs and LLMs excel on different structural patterns. The approach identifies when to leverage LLMs for node-aware fusion to maximize gains.

G-Merging: Graph Models Merging for Parameter-Efficient Multi-Task Knowledge Consolidation
Graph Learning

G-Merging presents a graph model merging framework for combining multiple task-specific GNNs into a single, parameter-efficient model. It addresses the structural heterogeneity of graphs and outperforms naive weight averaging by preserving task-relevant structure.

GNN-as-Judge: Unleashing the Power of LLMs for Graph Few-shot Semi-supervised Learning with GNN Feedback
GNN Graph Learning LLM × Graph

GNN-as-Judge leverages LLMs for graph few-shot semi-supervised learning with GNN feedback. It tackles two challenges: generating reliable pseudo labels when labeled data is scarce, and selecting high-quality pseudo labels through GNN guidance.

AdaSpec: Adaptive Spectrum for Enhanced Node Distinguishability
GNN Graph Learning

AdaSpec analyzes how graph matrices and node features jointly affect node distinguishability, deriving a lower bound based on eigenvalue diversity and feature frequency. Building on this analysis, it proposes an adaptive graph-matrix generator that enhances node distinguishability.

When to use Graphs in RAG: A Comprehensive Analysis for Graph Retrieval-Augmented Generation
Graph Learning LLM × Graph

This paper provides a comprehensive analysis of when GraphRAG yields benefits over vanilla RAG, examining tasks and data scenarios where graph structure improves reasoning and retrieval. It offers guidelines and recommendations for when to employ graph-based retrieval.

Entropy-Guided Dynamic Tokens for Graph-LLM Alignment in Molecular Understanding
LLM × Graph Graph Learning

EDT-Former introduces entropy-guided dynamic tokens for graph–LLM alignment in molecular understanding, replacing fixed-length tokens with dynamic ones that encode stereochemistry and substructural context. It aims to avoid costly LLM fine-tuning and improve efficiency.

Certified Evaluation of Model-Level Explanations for Graph Neural Networks
GNN Graph Learning

The work defines sufficiency risk as a formal criterion to assess whether model-level explanations truly reflect the motifs used by the classifier, moving beyond class-score-based evaluation.

On the trade-off between expressivity and privacy in graph representation learning
Graph Learning

We study the trade-off between expressivity and privacy in graph representations and propose homomorphism-density vectors as private yet discriminative embeddings. By adding noise proportional to density sensitivity, we achieve private graph representations without sacrificing discrimination.
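
As a generic illustration (not the paper's mechanism), a homomorphism density can be computed by brute force on a small graph and privatized with Laplace noise; the epsilon and sensitivity values below are assumed for the example only:

```python
import itertools
import random

def hom_density(pattern_edges, k, adj, n):
    """t(F, G): fraction of ALL maps V(F) -> V(G) (collapses allowed)
    that send every edge of the pattern F to an edge of G."""
    hits = sum(
        all(phi[b] in adj[phi[a]] for a, b in pattern_edges)
        for phi in itertools.product(range(n), repeat=k))
    return hits / n ** k

def laplace_noise(scale):
    """Laplace(0, scale) as the difference of two exponentials."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}   # G = triangle K3
edge = [(0, 1)]                            # F = single edge, k = 2
tri = [(0, 1), (1, 2), (0, 2)]             # F = triangle,    k = 3
print(hom_density(edge, 2, adj, 3))        # 6/9
print(hom_density(tri, 3, adj, 3))         # 6/27
# Private release; epsilon and the sensitivity bound are illustrative only.
epsilon, sensitivity = 1.0, 2 / 3
noisy = hom_density(edge, 2, adj, 3) + laplace_noise(sensitivity / epsilon)
```

A vector of such densities over a family of small patterns is a discriminative graph embedding, and calibrating the noise scale to each density's sensitivity is what trades privacy against discrimination.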

UrbanGraph: Physics-Informed Spatio-Temporal Dynamic Heterogeneous Graphs for Urban Microclimate Prediction
Graph Learning

UrbanGraph proposes a physics-informed, spatio-temporal dynamic heterogeneous graph for urban microclimate prediction, encoding time-varying causal relations like shading and convection into a dynamic topology. This framework aims to improve urban climate prediction and its link to energy and health.

gLSTM: Mitigating Over-Squashing by Increasing Storage Capacity
GNN Graph Learning

gLSTM revisits over-squashing by increasing storage capacity, defining storage/retrieval capacity as the amount of information that can be stored and retrieved in the network. This helps mitigate information bottlenecks in long-range message passing.

Beyond Simple Graphs: Neural Multi-Objective Routing on Multigraphs
GNN Graph Learning

We propose GNN-based methods for multi-objective routing on multigraphs, enabling multiple parallel edges with distinct attributes between node pairs. The first method autoregressively selects edges to complete a tour; the second integrates alternative routing strategies for multi-objective optimization.

Plan-Answer-Refine-on-Graph: Structured Planning and Self-Refinement for Large Language Model Reasoning on Knowledge Graphs
Knowledge Graph LLM × Graph Graph Learning

Plan-Answer-Refine-on-Graph presents a structured reasoning paradigm for KG-augmented LLMs, mitigating search space truncation bias and entity error amplification by planning, answering, and iterative refinement guided by the graph.

A Function-Centric Graph Neural Network Approach for Predicting Electron Densities
GNN Graph Learning

BOA is an equivariant GNN using the overlap matrix of basis functions to predict ground-state electron density. It yields high accuracy on QM9 and MD density datasets, outperforming baselines.

Neural Message-Passing on Attention Graphs for Hallucination Detection
GNN Graph Learning

CHARM unifies signals from activations and attention into attributed graphs where tokens are nodes and attentional flows are edges. It reframes hallucination detection as a graph learning problem and applies GNNs over these graphs, offering theoretical guarantees and strong empirical results.

H$^3$GNNs: Harmonizing Heterophily and Homophily in GNNs via Self-Supervised Node Encoding
GNN Graph Learning

H³GNNs addresses the challenge of modeling both heterophily and homophily under self-supervised learning. It introduces Representation Harmonization via Joint Structural Node Encoding to embed nodes into a unified representation that respects diverse neighborhood structures without labels.

GraphPlanner: Graph-Based Agentic Routing for LLMs
Graph Learning

GraphPlanner is a heterogeneous graph-based router for LLMs that generates routing workflows for each query and supports both inductive and transductive inference. It enables task planning, multi-round cooperation among heterogeneous agents, and memory utilization in agentic LLM settings.

Expressive and Invariant Graph Learning via Canonical Tree Cover Neural Networks
Graph Learning

Canonical Tree Cover Neural Networks (CTNNs) offer expressive and invariant graph learning by representing a graph with a canonical spanning-tree cover rather than a single canonical order. A small collection of spanning-tree covers preserves structure better and enables more powerful encoders while maintaining isomorphism invariance.

Forest-Based Graph Learning for Semi-Supervised Node Classification
Graph Learning

Forest-Based Graph Learning (FGL) reinterprets message passing as transportation over a forest of spanning trees to enable efficient long-range information propagation. A set of trees captures complementary paths, balancing cost and global receptive field, with theoretical and empirical gains in semi-supervised node classification.

A Hierarchical Circuit Symbolic Discovery Framework for Efficient Logic Optimization
GNN Graph Learning

HIS presents a hierarchical circuit-symbolic discovery framework to learn lightweight, interpretable representations that guide efficient logic optimization. By pruning ineffective subgraphs and discovering compact circuit patterns, it speeds up logic optimization while preserving performance.

Is Graph Unlearning Ready for Practice? A Benchmark on Efficiency, Utility, and Forgetting
GNN Graph Learning

This work introduces a principled benchmark to evaluate graph unlearning, focusing on efficiency, utility, and forgetting. It systematically assesses existing unlearning methods and offers guidance on when retraining is preferable and how to trade off performance and cost.

Dynamic Multi-sample Mixup with Gradient Exploration for Open-set Graph Anomaly Detection
GNN Graph Learning

DEMO presents Dynamic Multi-sample Mixup with Gradient Exploration for open-set graph anomaly detection, enabling robust generalization to unseen anomalies with limited labeled data. It uses dynamic mixup to augment samples and gradient exploration to prevent overfitting.

EvA: Evolutionary Attacks on Graphs
GNN Graph Learning

EvA introduces Evolutionary Attacks on Graphs, an evolutionary-based approach that directly optimizes discrete edge perturbations without gradient relaxation. This yields strong, adaptable attacks for discrete graph structures and non-differentiable objectives.
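
A gradient-free evolutionary search over discrete edge flips can be sketched generically; the toy graph, fitness function, and hyperparameters below are illustrative stand-ins, not the paper's attack objective:

```python
import random

random.seed(0)

def fitness(adj, labels, flips):
    """Toy non-differentiable objective: after applying the edge flips,
    count nodes whose own label no longer holds a strict neighborhood
    majority (an illustrative stand-in for classifier damage)."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    for a, b in flips:  # flip = delete existing edge or add missing one
        if b in g[a]:
            g[a].discard(b); g[b].discard(a)
        else:
            g[a].add(b); g[b].add(a)
    wrong = 0
    for v, nbrs in g.items():
        if nbrs:
            same = sum(labels[u] == labels[v] for u in nbrs)
            wrong += 2 * same <= len(nbrs)
    return wrong

def evolve(adj, labels, candidates, budget=2, pop=20, gens=30):
    """Tiny (mu + lambda)-style loop over sets of discrete edge flips:
    keep the fitter half, mutate one flip per child."""
    popn = [tuple(random.sample(candidates, budget)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda f: -fitness(adj, labels, f))
        parents = popn[:pop // 2]
        children = []
        for p in parents:
            child = list(p)
            child[random.randrange(budget)] = random.choice(candidates)
            children.append(tuple(child))
        popn = parents + children
    return max(popn, key=lambda f: fitness(adj, labels, f))

# Two triangles (labels 0 and 1) joined by a single bridge edge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
labels = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
candidates = [(a, b) for a in adj for b in adj if a < b]
best = evolve(adj, labels, candidates)
print("damage:", fitness(adj, labels, ()), "->", fitness(adj, labels, best))
```

Because selection and mutation act directly on discrete flip sets, no gradient relaxation of the adjacency matrix is needed, which is the core appeal of evolutionary attacks.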

GDGB: A Benchmark for Generative Dynamic Text-Attributed Graph Learning
Graph Learning

GDGB proposes a benchmark for Generative Dynamic Text-Attributed Graph Learning (DyTAG), addressing poor textual quality and the lack of standardized generation tasks and evaluation protocols. It defines tasks and metrics tailored for generative DyTAG research.