Showing 37 papers for 2026-02-24
GLaDiGAtor proposes a language-model-augmented, multi-relational graph learning framework for predicting disease–gene associations. By integrating a language model with GNN-based reasoning over heterogeneous biomedical relations, it improves representation quality and predictive accuracy while offering better interpretability. The results on benchmark datasets demonstrate gains over prior approaches.
From Few-Shot to Zero-Shot: Towards Generalist Graph Anomaly Detection presents a unified approach to graph anomaly detection that generalizes from few-shot to zero-shot across diverse datasets without dataset-specific retraining. It emphasizes cross-dataset generalization, data efficiency, and privacy preservation. Empirical results demonstrate strong zero-shot transfer and reduced training costs.
L2G-Net introduces Local-to-Global Spectral Graph Neural Networks by factorizing the graph Fourier transform into subgraph-based operators and combining them, enabling efficient long-range spectral modeling. This addresses locality limitations of traditional spectral methods and reduces computation compared to full eigendecomposition. Experiments show improved performance and scalability on benchmark graphs.
HEHRGNN proposes a unified embedding model for knowledge graphs with hyperedges and hyper-relational edges. It models n-ary facts natively and learns unified embeddings for nodes, hyperedges, and relations, improving link prediction and downstream tasks. Empirical results on KG benchmarks demonstrate superior performance over binary-relational baselines.
CTS-Bench introduces a benchmark suite to study graph coarsening trade-offs for GNNs applied to Clock Tree Synthesis in IC design. It systematically evaluates memory and runtime costs versus CTS-critical learning objectives, providing guidance on when and how to coarsen graphs for CTS tasks. The benchmark enables more reproducible and scalable CTS modeling.
Training-Free Cross-Architecture Merging for Graph Neural Networks introduces H-GRAMA, a training-free framework to merge heterogeneous GNNs by aligning operators rather than parameters. This cross-architecture merging overcomes topology-dependent misalignment in message passing and enables ensembles of diverse GNNs without retraining, achieving improved performance with low cost.
Spiking Graph Predictive Coding provides reliable out-of-distribution generalization on graphs by combining spiking neural networks with predictive coding in dynamic web contexts. It yields principled uncertainty estimates and robust predictions under distribution shifts. Experiments show improved OOD generalization and trustworthy outputs.
SME-HGT introduces a heterogeneous graph transformer framework to detect high-potential SMEs using exclusively public data. It builds a large heterogeneous graph with company, topic, and government agency nodes and ~99k edges across three relations, enabling accurate prediction of SBIR Phase I to Phase II progression.
VecFormer pursues an efficient and generalizable graph transformer with graph token attention and vector quantization. It reduces attention cost and enhances generalization, especially in out-of-distribution scenarios, while maintaining strong performance on standard benchmarks.
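The vector-quantization step can be pictured as snapping each graph token embedding to its nearest codebook entry. A minimal sketch, with a tiny illustrative codebook (not VecFormer's learned one, and omitting the attention side entirely):

```python
# Nearest-code assignment: the standard VQ step, sketched for 2-D tokens.
def quantize(token, codebook):
    def d2(a, b):  # squared Euclidean distance
        return sum((x - y) ** 2 for x, y in zip(a, b))
    idx = min(range(len(codebook)), key=lambda i: d2(token, codebook[i]))
    return idx, codebook[idx]

codebook = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
idx, code = quantize([0.9, 0.1], codebook)  # snaps to codebook entry [1.0, 0.0]
```

Replacing continuous tokens with a small set of shared codes is what bounds the attention cost and encourages transfer across graphs.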
Nazrin introduces atomic tactics for GNN-guided theorem proving in Lean 4, including a transposing atomization algorithm that converts arbitrary proof expressions into sequences of atomic tactics. This enables reliable, modular guidance for GNN-assisted proof search in Lean.

Revisiting Graph Neural Networks for Graph-level Tasks provides a taxonomy and a broad empirical study of GNNs on graph-level tasks, covering datasets, tasks, and evaluation practices, and outlines future directions to improve generalizability, reproducibility, and benchmarking fairness.
VillageNet proposes an unsupervised clustering method for broad biomedical applications. It clusters high-dimensional data without prior knowledge of the number of clusters by first partitioning with K-Means into villages, then constructing a weighted network for refinement, aiming for interpretable clustering.
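The two-stage idea can be sketched in miniature. Everything here is a simplifying assumption (1-D data, deterministic K-Means initialization, and a plain distance-threshold merge via union-find); VillageNet's actual refinement over the weighted village network is more involved:

```python
def kmeans_1d(points, k, iters=20):
    # Stage 1: over-partition into k small "villages".
    # Deterministic init: spread initial centers across the sorted data.
    pts = sorted(points)
    centers = [pts[i * (len(pts) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[j].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def merge_villages(centers, threshold):
    # Stage 2 (stand-in): union-find over villages, linking centroids
    # closer than `threshold`; the surviving roots are the final clusters.
    parent = list(range(len(centers)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if abs(centers[i] - centers[j]) < threshold:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(centers))})

data = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2, 9.0, 9.1]
villages = kmeans_1d(data, k=4)                       # 4 villages
n_clusters = merge_villages(villages, threshold=1.0)  # merged to 3 clusters
```

Note that the final cluster count (3) emerges from the merge, not from the initial k — which is the point of the two-stage design.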
Mamba-Based Graph Convolutional Networks introduces MbaGCN, a graph convolutional architecture inspired by the Mamba paradigm to tackle over-smoothing with selective state-space, enabling deeper networks with preserved discriminability.
Are We Measuring Oversmoothing in Graph Neural Networks Correctly? argues that current oversmoothing metrics have limitations and may misrepresent oversmoothing in realistic depths; it proposes more reliable evaluation approaches to capture when node representations become indistinguishable in practice.
Towards A Universal Graph Structural Encoder proposes a universal encoder that transfers structural knowledge across graph domains, addressing topology heterogeneity and enabling cross-domain transfer of structural representations.
GraphOmni is a comprehensive benchmark framework for evaluating large language models on graph-theoretic tasks, covering diverse graph types, formats, and prompting schemes. Systematic experiments reveal critical interactions among dimensions and show that prompting design and data representations significantly affect performance.
HetGL2R proposes a heterogeneous graph learning framework for ranking road segments by incorporating origin-destination flows and routes in a tripartite graph, with attribute-guided graphs to capture functional similarity, yielding improved ranking of critical segments.
Covariance Density Neural Networks construct a Covariance Density framework by treating the sample covariance matrix as a Graph Shift Operator and interpreting a density matrix as a quasi-Hamiltonian on the space of random variables, improving the modeling of statistical dependencies among variables.
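As a rough illustration of the covariance-as-shift-operator idea, the sketch below estimates a sample covariance and applies a polynomial graph filter in it. The filter coefficients are arbitrary, and the paper's density-matrix construction goes well beyond this:

```python
def covariance(samples):
    # Sample covariance of d variables from n observations.
    n, d = len(samples), len(samples[0])
    mean = [sum(s[j] for s in samples) / n for j in range(d)]
    return [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / n
             for j in range(d)] for i in range(d)]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]

def graph_filter(C, x, h):
    # y = h[0]*x + h[1]*C x + h[2]*C^2 x + ... : a polynomial in the
    # shift operator C, applied to the signal x over the variables.
    y, z = [0.0] * len(x), list(x)
    for hk in h:
        y = [yi + hk * zi for yi, zi in zip(y, z)]
        z = matvec(C, z)
    return y

samples = [[1.0, 2.0], [2.0, 4.0], [3.0, 6.0]]
C = covariance(samples)                       # here [[2/3, 4/3], [4/3, 8/3]]
y = graph_filter(C, [1.0, 0.0], [0.0, 1.0])   # one shift step: y = C @ x
```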
Predicting New Research Directions in Materials Science uses large language models to extract concepts from materials science abstracts and link them to propose near-term and mid-term research directions, demonstrating that LLMs can surface non-obvious connections across literature.
Graph Neural Networks Powered by Encoder Embedding presents a one-hot graph encoder embedding (GEE) as a high-quality, structure-aware initialization for node features, improving convergence and stability. It integrates GEE into standard GNNs to boost node representation learning.
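The GEE initialization itself is simple enough to sketch: node i's k-th embedding coordinate averages its edge weights to neighbors of class k. This is a minimal version for a dense adjacency matrix; the helper name is ours:

```python
def gee_embed(adj, labels, num_classes):
    # Z[i][k] = sum over neighbors j with label k of adj[i][j] / |class k|
    n = len(adj)
    class_size = [labels.count(k) for k in range(num_classes)]
    Z = [[0.0] * num_classes for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if adj[i][j]:
                k = labels[j]
                Z[i][k] += adj[i][j] / class_size[k]
    return Z

# Path graph 0-1-2 with labels [0, 1, 0]:
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
Z = gee_embed(adj, labels=[0, 1, 0], num_classes=2)
```

Because Z is a single sparse matrix product of the adjacency with a normalized one-hot label matrix, it is cheap to compute once and feed to any GNN as structure-aware input features.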
GEDAN proposes learning edit costs for Graph Edit Distance instead of assuming unit costs. By learning operation costs from data, it aims to produce more accurate GED approximations that better reflect topological and functional differences. The paper presents a learning-based approach to estimating these costs and evaluates their impact on GED computation.
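The core substitution — learned per-operation costs in place of unit costs — can be illustrated on a fixed edit path. The cost values below are invented for the example; GEDAN learns them from data:

```python
# Unit costs treat every edit operation as equally expensive.
UNIT = {"node_ins": 1.0, "node_del": 1.0, "node_sub": 1.0,
        "edge_ins": 1.0, "edge_del": 1.0}
# Learned costs (made-up numbers) can, e.g., make relabeling cheap
# relative to deletion, changing which edit path is optimal.
LEARNED = {"node_ins": 0.4, "node_del": 0.9, "node_sub": 0.2,
           "edge_ins": 0.6, "edge_del": 0.6}

def edit_cost(ops, costs):
    # `ops` counts each operation in a candidate edit path.
    return sum(costs[name] * count for name, count in ops.items())

ops = {"node_ins": 1, "node_del": 0, "node_sub": 2,
       "edge_ins": 1, "edge_del": 1}
unit_ged = edit_cost(ops, UNIT)        # 5.0 under unit costs
learned_ged = edit_cost(ops, LEARNED)  # 2.0 under the learned costs
```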
We analyze why current Temporal Graph Neural Networks struggle with node affinity prediction, showing that simple heuristics such as Persistent Forecast and Moving Average often outperform state-of-the-art models. We discuss the root challenges and offer practical guidance for training temporal GNNs on node affinity tasks.
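For reference, the two heuristic baselines in their simplest form (the window size is illustrative):

```python
def persistent_forecast(history):
    # Predict the next affinity as the most recently observed value.
    return history[-1]

def moving_average(history, window=3):
    # Predict the next affinity as the mean of the last `window` values.
    recent = history[-window:]
    return sum(recent) / len(recent)

history = [0.2, 0.4, 0.6, 0.8]
persistent_forecast(history)       # 0.8
moving_average(history, window=2)  # mean of the last two values
```

That baselines this cheap are competitive is what makes the negative result striking: whatever temporal GNNs learn on these tasks often adds little beyond recency.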
We propose a unified framework for Bayesian network structure discovery that leverages LLMs within the data-driven learning process. Unlike prior work that uses LLMs mainly for preprocessing or post hoc reasoning, our approach tightly integrates them to guide structure search with learned signals.
We introduce the Complex-Valued Stuart-Landau Graph Neural Network (SLGNN), inspired by Stuart-Landau oscillator dynamics. The model uses complex-valued representations to capture oscillatory behavior and synchronization, addressing oversmoothing and vanishing gradients in deep GNNs. It provides better long-range and dynamic modeling on graphs.
E2E-GRec enables end-to-end joint training of GNNs and downstream recommender systems to overcome the limitations of a two-stage offline embedding pipeline. By integrating training, it reduces computational overhead and prevents embedding staleness, improving recommendation quality.
We present ECHO, a benchmark to evaluate GNNs' ability to propagate information over long ranges. It includes three synthetic tasks—single-source shortest paths, node eccentricity, and graph diameter—designed to stress-test long-range communication. The benchmark reveals current models' limitations and guides future development.
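All three ECHO quantities have exact BFS-computable ground truths, which is what makes them clean stress tests for long-range propagation. A minimal sketch on a path graph:

```python
from collections import deque

def bfs_dists(adj, src):
    # Unweighted single-source shortest-path distances from `src`.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def eccentricity(adj, u):
    # Greatest distance from u to any other node.
    return max(bfs_dists(adj, u).values())

def diameter(adj):
    # Greatest eccentricity over all nodes.
    return max(eccentricity(adj, u) for u in adj)

# Path graph 0-1-2-3: a GNN needs 3 message-passing hops to relate 0 and 3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

A model with too few effective hops cannot in principle get eccentricity or diameter right on long paths, which is exactly the failure mode the benchmark is built to expose.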
We introduce LLM-WikiRace, a benchmark to assess LLM planning, reasoning, and world knowledge by navigating Wikipedia hyperlinks to reach a target page. It evaluates various models (open- and closed-source) on planning depth and real-world knowledge navigation.
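A BFS shortest hyperlink path over the link graph gives the reference trajectory length an LLM navigator can be judged against. The toy link graph and page names below are made up:

```python
from collections import deque

def shortest_path(links, start, target):
    # BFS with parent pointers; reconstruct the hyperlink path on arrival.
    prev = {start: None}
    q = deque([start])
    while q:
        page = q.popleft()
        if page == target:
            path = []
            while page is not None:
                path.append(page)
                page = prev[page]
            return path[::-1]
        for nxt in links.get(page, []):
            if nxt not in prev:
                prev[nxt] = page
                q.append(nxt)
    return None  # target unreachable from start

links = {"Cat": ["Mammal", "Egypt"], "Mammal": ["Animal"],
         "Egypt": ["Nile"], "Animal": ["Biology"]}
path = shortest_path(links, "Cat", "Biology")  # ["Cat", "Mammal", "Animal", "Biology"]
```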
We propose Retrieved Cell Complex-Augmented Generation to improve textual graph question answering. Unlike traditional 0D node and 1D edge treatments, we incorporate cell complexes to capture cycles and higher-dimensional relations, enabling better reasoning over relational loops.
We study routing-aware explanations for Mixture-of-Experts graph models in malware detection using CFGs. The architecture builds diversity at two levels: layer-wise neighborhood statistics fused by an MLP with a degree reweighting factor rho and pooling choices lambda (mean, std, max), yielding complementary node representations. We show that these explanations improve interpretability without sacrificing accuracy.
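The pooling menu (mean, std, max) with a degree-reweighting factor rho can be sketched as below. Only that menu comes from the summary; the way the pooled statistics are combined here, and the omission of the fusion MLP and of lambda's exact role, are our simplifications:

```python
import math

def pool(values, mode):
    # One neighborhood statistic per pooling choice.
    if mode == "mean":
        return sum(values) / len(values)
    if mode == "max":
        return max(values)
    if mode == "std":
        m = sum(values) / len(values)
        return math.sqrt(sum((v - m) ** 2 for v in values) / len(values))
    raise ValueError(mode)

def node_repr(neigh_feats, degree, rho=0.5, modes=("mean", "std", "max")):
    # Degree reweighting: scale each pooled statistic by degree**rho,
    # so high-degree nodes are not drowned out by averaging.
    w = degree ** rho
    return [w * pool(neigh_feats, m) for m in modes]
```

Each pooling mode sees the neighborhood differently (central tendency, spread, extremes), which is where the claimed complementarity of the resulting node representations comes from.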
We propose Temporal-Aware Heterogeneous Graph Reasoning with Multi-View Fusion for Temporal Question Answering. The framework uses temporal-aware question encoding, multi-hop graph reasoning, and multi-view fusion of language and graph information to better handle temporal constraints and multi-hop reasoning in TKGQA.
The Climate Change Knowledge Graph (CCKG) supports climate services by integrating datasets and simulations across scenarios and configurations. It improves data discovery via mapped metadata and community vocabularies and provides interfaces for efficient retrieval.
KNIGHT is an LLM-based framework that generates MCQ datasets from knowledge graphs. It builds a topic-specific KG, producing a compact summary of entities and relations that can be reused to generate questions with calibrated hardness.
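One way such a KG summary can seed MCQ generation is to turn a triple into a question and draw distractors from tails of the same relation. The template and triples below are toy stand-ins, not KNIGHT's actual prompts:

```python
def triple_to_mcq(triple, kg):
    # Correct answer = the tail; distractors = other tails of the same
    # relation, so wrong options are plausible (same semantic type).
    h, r, t = triple
    distractors = sorted({tt for (hh, rr, tt) in kg
                          if rr == r and tt != t})[:3]
    question = f"Which entity is related to {h} via '{r}'?"
    return question, [t] + distractors

kg = [("Paris", "capital_of", "France"),
      ("Berlin", "capital_of", "Germany"),
      ("Rome", "capital_of", "Italy")]
q, options = triple_to_mcq(kg[0], kg)
```

Sampling distractors from the same relation is one crude hardness knob; an LLM-based pipeline can calibrate it further by controlling how semantically close the distractors are.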
We propose Graph Neural Network Assisted Genetic Algorithm for structural dynamic response and parameter optimization. The hybrid framework uses GNN-based surrogates to speed up GA evaluations over FEM/CFD simulations, enabling efficient design optimization.
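The surrogate-in-the-loop pattern can be sketched with a stand-in fitness function — a quadratic here, where the paper would use a trained GNN surrogate of the FEM/CFD response:

```python
import random

def surrogate(x):
    # Stand-in for the trained GNN surrogate: cheap to evaluate, so the
    # GA can screen many candidates without running the simulator.
    return -(x - 3.0) ** 2  # optimum at x = 3

def ga_step(pop, rng):
    # Elitist step: keep the better half by surrogate fitness, then
    # produce one Gaussian-mutated child per survivor.
    pop = sorted(pop, key=surrogate, reverse=True)[: len(pop) // 2]
    children = [p + rng.gauss(0, 0.3) for p in pop]
    return pop + children

rng = random.Random(0)
pop = [rng.uniform(-10, 10) for _ in range(20)]
for _ in range(30):
    pop = ga_step(pop, rng)
best = max(pop, key=surrogate)  # converges near x = 3
```

In practice a few expensive simulator calls are still needed to validate (and periodically retrain) the surrogate on the GA's most promising candidates.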
A three-stage neuro-symbolic recommendation pipeline for cultural heritage knowledge graphs combines KG embeddings, approximate nearest-neighbour search, and SPARQL-based semantic filtering. The methodology is demonstrated on the JUHMP CHExRISH KG.
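The retrieve-then-filter shape of the pipeline can be sketched with cosine-similarity retrieval and a plain predicate standing in for the SPARQL filtering stage; all names and data here are hypothetical:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recommend(query_vec, items, top_k, predicate):
    # Stage 2: rank by embedding similarity (exact here; ANN in practice).
    ranked = sorted(items, key=lambda it: cosine(query_vec, it["vec"]),
                    reverse=True)
    # Stage 3: symbolic filter (a predicate standing in for SPARQL).
    return [it["id"] for it in ranked if predicate(it)][:top_k]

items = [
    {"id": "painting:1", "vec": [1.0, 0.0], "period": "baroque"},
    {"id": "statue:7",   "vec": [0.9, 0.1], "period": "modern"},
    {"id": "painting:3", "vec": [0.0, 1.0], "period": "baroque"},
]
recs = recommend([1.0, 0.0], items, top_k=1,
                 predicate=lambda it: it["period"] == "baroque")
```

Filtering after retrieval keeps the embedding index simple; the symbolic stage then enforces constraints (period, provenance, rights) that embeddings alone cannot guarantee.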
Hyper-KGGen introduces a skill-driven framework for high-quality knowledge hypergraph generation, addressing the gap between generic extractors and domain-specific terminology. It balances structural skeletons and fine-grained details.
We present a context-aware knowledge graph platform for stream processing in industrial IoT. It supports context-driven semantic integration, security, and interpretability to manage heterogeneous, high-velocity data streams in Industry 5.0 contexts.
S3GND proposes a learning-based approach for subgraph similarity search using generalized neighbor difference semantics, which accounts for both keyword-set relationships and structural matching. The method delivers effective similarity scoring on large graphs.