Daily arXiv Papers

Graph Neural Networks · Graph Learning · LLM × Graph

Showing 15 papers for 2026-03-05

Knowledge Graph and Hypergraph Transformers with Repository-Attention and Journey-Based Role Transport
Knowledge Graph Graph Learning

We propose a concise architecture for joint training on sentences and structured data, keeping knowledge and language representations separable. The model treats knowledge graphs and hypergraphs as structured instances with role slots and encodes them into a key-value repository that a language transformer can attend over. Attention is conditioned on journey-based role transport, which unifies edge-labeled KG traversal, hyperedge traversal, and sentence structure, organized in dual-stream and hierarchical layer groups.
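The repository idea can be sketched as cross-attention over per-triple key-value slots. Everything below (the toy triples, dimensions, and projections) is invented for illustration, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # embedding dimension (illustrative)

# Hypothetical KG triples with (head, relation, tail) role slots.
triples = [("paris", "capital_of", "france"), ("berlin", "capital_of", "germany")]
vocab = {w: rng.standard_normal(d) for t in triples for w in t}

# Repository: one key/value pair per triple (concatenate role slots, project).
W_k = rng.standard_normal((3 * d, d))
W_v = rng.standard_normal((3 * d, d))
slots = np.stack([np.concatenate([vocab[h], vocab[r], vocab[t]]) for h, r, t in triples])
K, V = slots @ W_k, slots @ W_v  # shape: (num_triples, d)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attend(queries):
    """Language-side token states attend over the KG repository."""
    scores = queries @ K.T / np.sqrt(d)
    return softmax(scores) @ V

tokens = rng.standard_normal((5, d))  # 5 sentence-token states
out = attend(tokens)
print(out.shape)  # (5, 8)
```

The key point is the separability the abstract mentions: the repository (K, V) is built entirely from structured data, while the language stream only reads from it through attention.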

Towards Improved Sentence Representations using Token Graphs
Graph Learning

We introduce GLOT, a lightweight structure-aware pooling module for sentence representations derived from LLM token outputs. It reframes pooling as relational learning followed by aggregation, preserving relational information captured by self-attention and mitigating signal dilution from naive token pooling. GLOT is designed to be efficient and easy to integrate into existing models.
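One plausible reading of "relational learning followed by aggregation" is a propagate-then-pool step over a token affinity graph; the scorer and pooling below are illustrative stand-ins, not GLOT's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 6, 4                              # tokens, hidden size
H = rng.standard_normal((n, d))          # LLM token outputs

# Relational step: build a token graph from pairwise affinities
# (a stand-in for the self-attention map the abstract refers to).
A = H @ H.T
A = np.exp(A - A.max(axis=1, keepdims=True))
A = A / A.sum(axis=1, keepdims=True)     # row-stochastic adjacency

# Propagate over the token graph, then aggregate.
H_rel = A @ H                            # structure-aware token mixing
sentence_vec = H_rel.mean(axis=0)        # pooled sentence representation

print(sentence_vec.shape)  # (4,)
```

Compared with naive mean pooling of `H`, the relational step lets strongly related tokens reinforce each other before aggregation, which is the signal-dilution argument in the abstract.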

Graph Negative Feedback Bias Correction Framework for Adaptive Heterophily Modeling
GNN Graph Learning

GNNs excel on homophilic graphs but struggle under heterophily. We provide a detailed analysis of how label autocorrelation under homophily biases learning and interacts with message-passing constraints. We then propose a bias-correction framework that adapts to heterophily and mitigates negative feedback effects, improving performance on heterophily-rich graphs.

k-hop Fairness: Addressing Disparities in Graph Link Prediction Beyond First-Order Neighborhoods
GNN Graph Learning

Link prediction on graphs often inherits structural biases such as homophily, which can widen disparities across groups. We propose k-hop fairness, which extends fairness considerations beyond first-order neighborhoods to discourage exclusively intra-group links and promote inter-group connections. Our approach analyzes multi-hop signals and provides methods to reduce disparities in link predictions.
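A minimal way to quantify the multi-hop group composition such a criterion would monitor is to count how many pairs reachable within k hops cross group boundaries (toy example, not the paper's metric):

```python
import numpy as np

# Toy graph: nodes 0-2 in group A, 3-5 in group B (illustrative).
A = np.array([
    [0, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 0],
])
groups = np.array([0, 0, 0, 1, 1, 1])

def khop_intergroup_ratio(A, groups, k):
    """Fraction of pairs reachable within k hops that cross groups."""
    reach = (A > 0)
    acc = reach.copy()
    for _ in range(k - 1):
        reach = (reach @ A) > 0      # one more hop of reachability
        acc |= reach
    np.fill_diagonal(acc, False)     # ignore self-pairs
    cross = acc & (groups[:, None] != groups[None, :])
    return cross.sum() / acc.sum()

print(khop_intergroup_ratio(A, groups, 1))  # 1/3 at one hop
print(khop_intergroup_ratio(A, groups, 2))  # 1/2 within two hops
```

A first-order fairness criterion would only see the k=1 number; the k=2 value shows how inter-group exposure can look quite different once multi-hop structure is taken into account.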

TFWaveFormer: Temporal-Frequency Collaborative Multi-level Wavelet Transformer for Dynamic Link Prediction
Graph Learning

We present TFWaveFormer, a temporal-frequency collaborative multi-level Wavelet Transformer for dynamic link prediction. It combines temporal-frequency analysis with multi-resolution wavelet decomposition to capture complex, multi-scale temporal dynamics in evolving graphs, improving prediction accuracy. The architecture integrates wavelet-based representations with Transformer modeling across time scales.
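The multi-resolution part can be illustrated with a plain multi-level Haar decomposition of a temporal signal, separating coarse trends from fine fluctuations (a generic sketch, not TFWaveFormer's actual transform):

```python
import numpy as np

def haar_decompose(x, levels):
    """Multi-level Haar wavelet decomposition of a temporal signal.

    Returns (approximation, [detail_level1, ..., detail_levelL]).
    len(x) must be divisible by 2**levels.
    """
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))  # high-frequency band
        approx = (even + odd) / np.sqrt(2)         # low-frequency band
    return approx, details

# Toy edge-activity series over 8 time steps (illustrative).
signal = np.array([2., 2., 4., 4., 6., 6., 8., 8.])
approx, details = haar_decompose(signal, levels=2)
print(approx)      # coarse trend: [ 6. 14.]
print(details[0])  # finest-scale fluctuations: all zero here
```

Each level halves the temporal resolution, so a Transformer operating on the resulting bands sees the same sequence at several time scales at once, which is the intuition behind combining wavelets with attention.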

Beyond Edge Deletion: A Comprehensive Approach to Counterfactual Explanation in Graph Neural Networks
GNN Graph Learning

We introduce XPlore, a comprehensive framework for counterfactual explanations in graph neural networks beyond edge deletion. XPlore identifies minimal, plausible perturbations that flip a model's prediction and presents them as transparent explanations. The approach enhances interpretability and trust in GNN decisions, especially in high-stakes domains.
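Going "beyond edge deletion" means the perturbation space also includes edits such as edge additions. A greedy one-edit search over a toy black-box predictor illustrates the shape of such a counterfactual search (the model and all values are invented; XPlore's method is more sophisticated):

```python
import numpy as np
from itertools import combinations

# Toy setup: classify node 0 by the sign of its neighbors' mean feature
# (a stand-in for a trained GNN; any black-box predictor fits here).
feat = np.array([0.0, 1.0, 1.0, -3.0, -3.0, -3.0])
edges = {(0, 1), (0, 2), (3, 4)}

def predict(edges):
    nbrs = [v for u, v in edges if u == 0] + [u for u, v in edges if v == 0]
    return 1 if nbrs and np.mean([feat[n] for n in nbrs]) > 0 else -1

def counterfactual(edges, target_pred):
    """Greedy one-edit search over deletions AND additions."""
    candidates = [("del", e) for e in edges] + \
                 [("add", e) for e in combinations(range(len(feat)), 2)
                  if e not in edges]
    for op, e in candidates:
        new = (edges - {e}) if op == "del" else (edges | {e})
        if predict(new) == target_pred:
            return op, e
    return None

print(predict(edges))             # 1: positive neighbors dominate
print(counterfactual(edges, -1))  # ('add', (0, 3)): no single deletion flips it
```

In this toy graph no single deletion changes the prediction, but adding one edge to a negatively-featured node does, which is exactly the kind of counterfactual a deletion-only explainer would miss.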

Toward Reasoning on the Boundary: A Mixup-based Approach for Graph Anomaly Detection
GNN Graph Learning

To address boundary anomalies, we propose ANOMIX, a mixup-based framework for graph anomaly detection. By synthesizing informative hard negatives through mixup, it challenges the model with near-boundary instances and improves reasoning beyond easy negatives. This yields sharper decision boundaries and improved detection of subtle anomalies.
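The core mixup operation is a convex combination of normal and anomalous samples, with the mixing weight biased toward the normal side so synthetic negatives land near the boundary (an illustrative sketch, not ANOMIX's exact scheme):

```python
import numpy as np

rng = np.random.default_rng(42)

def mixup_hard_negatives(normal, anomalous, lam_low=0.5, lam_high=0.9):
    """Convex combinations biased toward the normal side: synthetic
    negatives sit near the decision boundary rather than deep inside
    the anomaly region."""
    lam = rng.uniform(lam_low, lam_high, size=(len(normal), 1))
    return lam * normal + (1 - lam) * anomalous

normal = rng.standard_normal((4, 3))            # toy normal features
anomalous = rng.standard_normal((4, 3)) + 5.0   # shifted anomaly cluster
hard = mixup_hard_negatives(normal, anomalous)
print(hard.shape)  # (4, 3): each row lies between its two parents
```

Training a detector against `hard` in addition to easy negatives forces the boundary to tighten around the normal class, which is the "reasoning on the boundary" the title refers to.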

A Geometric Perspective on the Difficulties of Learning GNN-based SAT Solvers
GNN Graph Learning

We study why GNNs struggle on hard SAT instances by examining geometry via graph Ricci curvature. We prove that bipartite graphs derived from random k-SAT formulas are negatively curved, and that curvature declines as constraints tighten, correlating with increased difficulty. This geometric lens explains architectural limitations in learning SAT solvers with GNNs.
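For intuition, a cheap combinatorial proxy for Ricci curvature is the Forman curvature, which on triangle-free graphs (bipartite graphs are triangle-free) reduces to F(u,v) = 4 - deg(u) - deg(v). On random k-SAT incidence graphs it grows more negative as clause density rises; the sketch below shows the trend, not the paper's analysis:

```python
import random

random.seed(0)

def random_ksat_graph(n_vars, n_clauses, k=3):
    """Bipartite variable-clause incidence graph of a random k-SAT formula."""
    edges = set()
    for c in range(n_clauses):
        for v in random.sample(range(n_vars), k):
            edges.add((f"x{v}", f"c{c}"))
    return edges

def forman_curvature(edges):
    """Forman-Ricci curvature F(u,v) = 4 - deg(u) - deg(v) per edge
    (the triangle-free form, exact on bipartite graphs)."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return {e: 4 - deg[e[0]] - deg[e[1]] for e in edges}

sparse = forman_curvature(random_ksat_graph(20, 10))   # low clause density
dense = forman_curvature(random_ksat_graph(20, 80))    # high clause density

avg = lambda c: sum(c.values()) / len(c)
print(avg(sparse), avg(dense))  # average curvature drops as constraints tighten
```

Since each clause node has degree k, edge curvature is 1 - deg(variable) for k=3, so tightening constraints (higher variable degrees) drives curvature down, mirroring the paper's correlation between negative curvature and instance difficulty.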

Bridging Computational Social Science and Deep Learning: Cultural Dissemination-Inspired Graph Neural Networks
GNN Graph Learning

Bridging computational social science and deep learning, AxelGNN integrates Axelrod's cultural dissemination model into GNNs. The architecture addresses feature oversmoothing, heterogeneous relationships, and monolithic feature aggregation by incorporating cultural dynamics into message passing. The result is more robust, explainable representations for social-network tasks.
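Axelrod's model lets agents interact with probability proportional to their cultural similarity. One illustrative translation into message passing gates each edge by feature similarity, so dissimilar communities stop averaging into each other; this is an invented reading of the idea, not AxelGNN's code:

```python
import numpy as np

def axelrod_step(X, A, tau=0.5):
    """One similarity-gated propagation step: neighbors exchange messages
    with weight proportional to their 'cultural' (feature) similarity,
    and not at all below threshold tau, which counteracts oversmoothing."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    S = np.clip(Xn @ Xn.T, 0, None) * A        # cosine similarity on edges
    S = S * (S > tau)                           # interact only if similar enough
    W = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)
    return 0.5 * X + 0.5 * (W @ X)              # damped update

# Two "cultural" clusters joined by one bridge edge (toy example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
X = np.array([[1.0, 0.0], [1.0, 0.1], [0.0, 1.0], [0.1, 1.0]])
X1 = axelrod_step(X, A)
print(X1)  # the bridge edge (1-2) carries no message: clusters stay distinct
```

With uniform averaging the bridge edge would slowly merge the two clusters; the similarity gate closes that edge, so each cluster smooths internally while the global distinction survives.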

Belief-Sim: Towards Belief-Driven Simulation of Demographic Misinformation Susceptibility
Knowledge Graph Graph Learning

Belief-Sim is a framework for simulating demographic misinformation susceptibility using LLMs, constructing belief profiles via psychology-informed taxonomies and survey priors. It studies prompt-based simulations to explore how beliefs shape susceptibility across demographics. The work provides a scaffold for modeling misinformation risk in diverse populations.

GraphMERT: Efficient and Scalable Distillation of Reliable Knowledge Graphs from Unstructured Data
Knowledge Graph Graph Learning

GraphMERT advances scalable distillation of reliable knowledge graphs from unstructured data by combining neurosymbolic AI approaches with distillation. It addresses scalability, interpretability, and trust issues in KG construction, offering an efficient pipeline to extract structured knowledge from raw text and other sources.

Knowledge Graphs are Implicit Reward Models: Path-Derived Signals Enable Compositional Reasoning
Knowledge Graph LLM × Graph

Treating knowledge graphs as implicit reward models, we propose a bottom-up paradigm in which models ground reasoning in axiomatic domain facts and solve tasks by composing those facts. A post-training pipeline with supervised fine-tuning and RL uses KG-derived path signals as implicit rewards to guide multi-hop reasoning.
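The implicit-reward idea can be sketched as a binary signal: an answer earns reward only if it is reachable from the question entity by composing KG facts (the toy KG and reward shape below are invented for illustration):

```python
from collections import deque

# Toy KG as directed labeled facts (illustrative).
kg = {
    ("lisbon", "capital_of"): "portugal",
    ("portugal", "part_of"): "iberia",
    ("iberia", "part_of"): "europe",
}

def kg_reachable(start, answer, max_hops=3):
    """True if `answer` is reachable from `start` by composing KG facts."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        node, d = frontier.popleft()
        if node == answer:
            return True
        if d == max_hops:
            continue
        for (h, _), t in kg.items():
            if h == node and t not in seen:
                seen.add(t)
                frontier.append((t, d + 1))
    return False

def path_reward(question_entity, model_answer):
    """Implicit reward: 1 if some KG path grounds the answer, else 0."""
    return 1.0 if kg_reachable(question_entity, model_answer) else 0.0

print(path_reward("lisbon", "europe"))  # 1.0: lisbon -> portugal -> iberia -> europe
print(path_reward("lisbon", "asia"))    # 0.0: no supporting path
```

A reward like this needs no learned reward model: the KG itself scores multi-hop answers, which is why the abstract calls knowledge graphs implicit reward models.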

Combating data scarcity in recommendation services: Integrating cognitive types of VARK and neural network technologies (LLM)
Knowledge Graph Graph Learning

This work combats data scarcity in recommendation services by integrating cognitive profiles (VARK) with LLM-powered knowledge graphs. The hybrid framework tackles cold-start from multiple angles, pairing semantic analysis and knowledge-graph enrichment of content with VARK learning preferences. The approach aims to improve recommendations when interaction data is scarce.

OneRanker: Unified Generation and Ranking with One Model in Industrial Advertising Recommendation
Graph Theory Graph Learning

OneRanker is a unified model that handles both generation and ranking for industrial advertising recommendations. It addresses misalignment between objectives and business value, target-agnostic generation, and the disconnect between generation and ranking. By combining generation and ranking in a single model, it aims to improve efficiency and performance.

Scalable Join Inference for Large Context Graphs
Knowledge Graph

We propose a scalable join inference method for large context graphs. Our approach fuses statistical pruning with LLM reasoning to infer joins between entities while reducing invalid joins and duplicates. The hybrid method mirrors human semantic understanding and scales to large context graphs.